Test Report: QEMU_macOS 19195

3c49d247522650dad7be9dd4f792820e054aa6e4:2024-07-08:35243

Failed tests (101/279)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.37
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.2
46 TestCertOptions 10.05
47 TestCertExpiration 195.21
48 TestDockerFlags 10.1
49 TestForceSystemdFlag 10.38
50 TestForceSystemdEnv 10.05
95 TestFunctional/parallel/ServiceCmdConnect 37.19
160 TestMultiControlPlane/serial/StartCluster 79.01
161 TestMultiControlPlane/serial/DeployApp 74.46
162 TestMultiControlPlane/serial/PingHostFromPods 0.13
163 TestMultiControlPlane/serial/AddWorkerNode 0.14
164 TestMultiControlPlane/serial/NodeLabels 0.1
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.14
166 TestMultiControlPlane/serial/CopyFile 0.14
167 TestMultiControlPlane/serial/StopSecondaryNode 0.19
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.15
169 TestMultiControlPlane/serial/RestartSecondaryNode 53.15
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.14
172 TestMultiControlPlane/serial/DeleteSecondaryNode 1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.85
174 TestMultiControlPlane/serial/StopCluster 9.31
175 TestMultiControlPlane/serial/RestartCluster 104.12
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.07
177 TestMultiControlPlane/serial/AddSecondaryNode 355.87
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 14.6
183 TestImageBuild/serial/BuildWithBuildArg 0.96
221 TestMountStart/serial/StartWithMountFirst 10.31
224 TestMultiNode/serial/FreshStart2Nodes 9.97
225 TestMultiNode/serial/DeployApp2Nodes 111.79
226 TestMultiNode/serial/PingHostFrom2Pods 0.09
227 TestMultiNode/serial/AddNode 0.07
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.07
230 TestMultiNode/serial/CopyFile 0.06
231 TestMultiNode/serial/StopNode 0.14
232 TestMultiNode/serial/StartAfterStop 51.94
233 TestMultiNode/serial/RestartKeepsNodes 8.67
234 TestMultiNode/serial/DeleteNode 0.1
235 TestMultiNode/serial/StopMultiNode 3.14
236 TestMultiNode/serial/RestartMultiNode 5.25
237 TestMultiNode/serial/ValidateNameConflict 20.13
241 TestPreload 10.09
243 TestScheduledStopUnix 10
244 TestSkaffold 12.78
247 TestRunningBinaryUpgrade 599.87
249 TestKubernetesUpgrade 18.45
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.3
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.03
265 TestStoppedBinaryUpgrade/Upgrade 573.08
267 TestPause/serial/Start 10.08
277 TestNoKubernetes/serial/StartWithK8s 10.11
278 TestNoKubernetes/serial/StartWithStopK8s 5.29
279 TestNoKubernetes/serial/Start 5.27
283 TestNoKubernetes/serial/StartNoArgs 5.31
285 TestNetworkPlugins/group/auto/Start 9.74
286 TestNetworkPlugins/group/kindnet/Start 9.91
287 TestNetworkPlugins/group/flannel/Start 9.76
288 TestNetworkPlugins/group/enable-default-cni/Start 9.88
289 TestNetworkPlugins/group/bridge/Start 9.81
290 TestNetworkPlugins/group/kubenet/Start 9.75
291 TestNetworkPlugins/group/custom-flannel/Start 9.84
292 TestNetworkPlugins/group/calico/Start 9.76
293 TestNetworkPlugins/group/false/Start 9.82
296 TestStartStop/group/old-k8s-version/serial/FirstStart 9.71
297 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
301 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
302 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/old-k8s-version/serial/Pause 0.1
307 TestStartStop/group/no-preload/serial/FirstStart 10.04
309 TestStartStop/group/embed-certs/serial/FirstStart 11.21
310 TestStartStop/group/no-preload/serial/DeployApp 0.1
311 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.13
314 TestStartStop/group/no-preload/serial/SecondStart 6.32
315 TestStartStop/group/embed-certs/serial/DeployApp 0.1
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
320 TestStartStop/group/no-preload/serial/Pause 0.1
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.15
325 TestStartStop/group/embed-certs/serial/SecondStart 6.57
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
331 TestStartStop/group/embed-certs/serial/Pause 0.11
334 TestStartStop/group/newest-cni/serial/FirstStart 10.09
336 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 7.1
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
340 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
342 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.06
343 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
345 TestStartStop/group/newest-cni/serial/SecondStart 5.25
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
349 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (11.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-385000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-385000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.366243458s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c525407c-9fdf-4b40-b7f7-7203fad4cdf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-385000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"255344df-9f41-40e1-bb49-05fe83812954","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19195"}}
	{"specversion":"1.0","id":"012ad628-a877-493e-98a7-b58cb4656e8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig"}}
	{"specversion":"1.0","id":"b2df048a-0eca-41d8-b631-94d9971c480a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"76712747-e06e-4c14-a7be-ea516f36a8ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ab65a111-62fc-4783-94d8-7e386bdfbdf7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube"}}
	{"specversion":"1.0","id":"f19e8bde-ac37-40fa-acb1-d99b16301599","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"a420f448-d413-4f06-87fa-664c430b574a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad1172da-43d9-4305-94c4-1cff1978a355","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"f6f5a3bb-ce3f-4380-be9a-7dfc68ec69d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d01ca10-0b8d-46ba-bb12-69ddb1682b3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-385000\" primary control-plane node in \"download-only-385000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc856afb-b997-422b-8d10-cb78436cd06d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e5c06f7-664f-4915-8c8a-8fdbaf5498e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10491dac0 0x10491dac0 0x10491dac0 0x10491dac0 0x10491dac0 0x10491dac0 0x10491dac0] Decompressors:map[bz2:0x1400000ffa0 gz:0x1400000ffa8 tar:0x1400000ff50 tar.bz2:0x1400000ff60 tar.gz:0x1400000ff70 tar.xz:0x1400000ff80 tar.zst:0x1400000ff90 tbz2:0x1400000ff60 tgz:0x14
00000ff70 txz:0x1400000ff80 tzst:0x1400000ff90 xz:0x1400000ffd0 zip:0x1400000ffe0 zst:0x1400000ffd8] Getters:map[file:0x140008f6b00 http:0x140007c81e0 https:0x140007c8230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"1e8badd1-5388-4691-a10d-ebacd1ac9bc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:28:20.694736    1769 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:28:20.694898    1769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:28:20.694901    1769 out.go:304] Setting ErrFile to fd 2...
	I0708 12:28:20.694904    1769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:28:20.695028    1769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	W0708 12:28:20.695127    1769 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19195-1270/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19195-1270/.minikube/config/config.json: no such file or directory
	I0708 12:28:20.696409    1769 out.go:298] Setting JSON to true
	I0708 12:28:20.713648    1769 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1668,"bootTime":1720465232,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:28:20.713716    1769 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:28:20.718138    1769 out.go:97] [download-only-385000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:28:20.718287    1769 notify.go:220] Checking for updates...
	W0708 12:28:20.718317    1769 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball: no such file or directory
	I0708 12:28:20.721062    1769 out.go:169] MINIKUBE_LOCATION=19195
	I0708 12:28:20.724129    1769 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:28:20.729053    1769 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:28:20.732096    1769 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:28:20.735097    1769 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	W0708 12:28:20.741110    1769 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0708 12:28:20.741356    1769 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:28:20.746013    1769 out.go:97] Using the qemu2 driver based on user configuration
	I0708 12:28:20.746032    1769 start.go:297] selected driver: qemu2
	I0708 12:28:20.746045    1769 start.go:901] validating driver "qemu2" against <nil>
	I0708 12:28:20.746127    1769 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 12:28:20.747682    1769 out.go:169] Automatically selected the socket_vmnet network
	I0708 12:28:20.753648    1769 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0708 12:28:20.753737    1769 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0708 12:28:20.753807    1769 cni.go:84] Creating CNI manager for ""
	I0708 12:28:20.753823    1769 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0708 12:28:20.753877    1769 start.go:340] cluster config:
	{Name:download-only-385000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:28:20.759099    1769 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:28:20.763098    1769 out.go:97] Downloading VM boot image ...
	I0708 12:28:20.763125    1769 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso
	I0708 12:28:25.393561    1769 out.go:97] Starting "download-only-385000" primary control-plane node in "download-only-385000" cluster
	I0708 12:28:25.393578    1769 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0708 12:28:25.469855    1769 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0708 12:28:25.469884    1769 cache.go:56] Caching tarball of preloaded images
	I0708 12:28:25.470064    1769 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0708 12:28:25.475205    1769 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0708 12:28:25.475214    1769 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0708 12:28:25.554617    1769 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0708 12:28:30.874696    1769 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0708 12:28:30.875205    1769 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0708 12:28:31.570751    1769 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0708 12:28:31.570956    1769 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/download-only-385000/config.json ...
	I0708 12:28:31.570974    1769 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/download-only-385000/config.json: {Name:mk6ade450131b0b9717451de9ef19a570a5c0fec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:28:31.571200    1769 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0708 12:28:31.571381    1769 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0708 12:28:31.986335    1769 out.go:169] 
	W0708 12:28:31.993828    1769 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10491dac0 0x10491dac0 0x10491dac0 0x10491dac0 0x10491dac0 0x10491dac0 0x10491dac0] Decompressors:map[bz2:0x1400000ffa0 gz:0x1400000ffa8 tar:0x1400000ff50 tar.bz2:0x1400000ff60 tar.gz:0x1400000ff70 tar.xz:0x1400000ff80 tar.zst:0x1400000ff90 tbz2:0x1400000ff60 tgz:0x1400000ff70 txz:0x1400000ff80 tzst:0x1400000ff90 xz:0x1400000ffd0 zip:0x1400000ffe0 zst:0x1400000ffd8] Getters:map[file:0x140008f6b00 http:0x140007c81e0 https:0x140007c8230] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0708 12:28:31.993861    1769 out_reason.go:110] 
	W0708 12:28:32.000687    1769 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:28:32.004647    1769 out.go:169] 

                                                
                                                
** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-385000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.37s)
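Both v1.20.0 download failures above share one root cause: dl.k8s.io answers 404 for the darwin/arm64 kubectl checksum at that version, so minikube cannot cache the binary. The sketch below is illustrative only (the file and function names are made up, not minikube code); it probes whether a given version/OS/arch combination is actually published and reproduces the 404 outside the test run.

// probe_kubectl.go: illustrative only. Checks whether dl.k8s.io publishes a
// kubectl binary and its .sha256 checksum for a given version/OS/arch.
package main

import (
	"fmt"
	"net/http"
)

// exists issues a HEAD request; 200 means the artifact is published, 404 means it is not.
func exists(url string) (int, error) {
	resp, err := http.Head(url)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	version, goos, arch := "v1.20.0", "darwin", "arm64" // the combination that failed above
	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/%s/%s/kubectl", version, goos, arch)
	for _, u := range []string{base, base + ".sha256"} {
		code, err := exists(u)
		if err != nil {
			fmt.Printf("%s -> error: %v\n", u, err)
			continue
		}
		fmt.Printf("%s -> HTTP %d\n", u, code)
	}
}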

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestOffline (10.2s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-291000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-291000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.057903708s)

                                                
                                                
-- stdout --
	* [offline-docker-291000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-291000" primary control-plane node in "offline-docker-291000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-291000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:59:38.335450    3623 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:59:38.335576    3623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:59:38.335579    3623 out.go:304] Setting ErrFile to fd 2...
	I0708 12:59:38.335581    3623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:59:38.335707    3623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:59:38.336924    3623 out.go:298] Setting JSON to false
	I0708 12:59:38.354330    3623 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3546,"bootTime":1720465232,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:59:38.354399    3623 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:59:38.358537    3623 out.go:177] * [offline-docker-291000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:59:38.366364    3623 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:59:38.366411    3623 notify.go:220] Checking for updates...
	I0708 12:59:38.372332    3623 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:59:38.375288    3623 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:59:38.378384    3623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:59:38.381322    3623 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:59:38.384364    3623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:59:38.392668    3623 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:59:38.392736    3623 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:59:38.396303    3623 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 12:59:38.403347    3623 start.go:297] selected driver: qemu2
	I0708 12:59:38.403358    3623 start.go:901] validating driver "qemu2" against <nil>
	I0708 12:59:38.403366    3623 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:59:38.405282    3623 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 12:59:38.408317    3623 out.go:177] * Automatically selected the socket_vmnet network
	I0708 12:59:38.411360    3623 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:59:38.411395    3623 cni.go:84] Creating CNI manager for ""
	I0708 12:59:38.411404    3623 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 12:59:38.411407    3623 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 12:59:38.411447    3623 start.go:340] cluster config:
	{Name:offline-docker-291000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-291000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:59:38.415093    3623 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:59:38.422150    3623 out.go:177] * Starting "offline-docker-291000" primary control-plane node in "offline-docker-291000" cluster
	I0708 12:59:38.426302    3623 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:59:38.426331    3623 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:59:38.426339    3623 cache.go:56] Caching tarball of preloaded images
	I0708 12:59:38.426409    3623 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:59:38.426414    3623 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:59:38.426480    3623 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/offline-docker-291000/config.json ...
	I0708 12:59:38.426491    3623 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/offline-docker-291000/config.json: {Name:mk047c303d0ff6f5f8252a7ea8f46f81a62ac79c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:59:38.426771    3623 start.go:360] acquireMachinesLock for offline-docker-291000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:59:38.426803    3623 start.go:364] duration metric: took 25.167µs to acquireMachinesLock for "offline-docker-291000"
	I0708 12:59:38.426814    3623 start.go:93] Provisioning new machine with config: &{Name:offline-docker-291000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-291000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:59:38.426841    3623 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 12:59:38.431165    3623 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0708 12:59:38.446775    3623 start.go:159] libmachine.API.Create for "offline-docker-291000" (driver="qemu2")
	I0708 12:59:38.446807    3623 client.go:168] LocalClient.Create starting
	I0708 12:59:38.446867    3623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 12:59:38.446901    3623 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:38.446911    3623 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:38.446951    3623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 12:59:38.446973    3623 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:38.446983    3623 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:38.447384    3623 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 12:59:38.593970    3623 main.go:141] libmachine: Creating SSH key...
	I0708 12:59:38.642406    3623 main.go:141] libmachine: Creating Disk image...
	I0708 12:59:38.642415    3623 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 12:59:38.642616    3623 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/disk.qcow2
	I0708 12:59:38.658223    3623 main.go:141] libmachine: STDOUT: 
	I0708 12:59:38.658240    3623 main.go:141] libmachine: STDERR: 
	I0708 12:59:38.658291    3623 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/disk.qcow2 +20000M
	I0708 12:59:38.669915    3623 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 12:59:38.669934    3623 main.go:141] libmachine: STDERR: 
	I0708 12:59:38.669954    3623 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/disk.qcow2
	I0708 12:59:38.669958    3623 main.go:141] libmachine: Starting QEMU VM...
	I0708 12:59:38.669990    3623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:98:70:1a:e7:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/disk.qcow2
	I0708 12:59:38.671730    3623 main.go:141] libmachine: STDOUT: 
	I0708 12:59:38.671746    3623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:59:38.671769    3623 client.go:171] duration metric: took 224.964417ms to LocalClient.Create
	I0708 12:59:40.673814    3623 start.go:128] duration metric: took 2.247017708s to createHost
	I0708 12:59:40.673840    3623 start.go:83] releasing machines lock for "offline-docker-291000", held for 2.24709625s
	W0708 12:59:40.673857    3623 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:59:40.681876    3623 out.go:177] * Deleting "offline-docker-291000" in qemu2 ...
	W0708 12:59:40.691593    3623 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:59:40.691607    3623 start.go:728] Will try again in 5 seconds ...
	I0708 12:59:45.693744    3623 start.go:360] acquireMachinesLock for offline-docker-291000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:59:45.694302    3623 start.go:364] duration metric: took 399.208µs to acquireMachinesLock for "offline-docker-291000"
	I0708 12:59:45.694429    3623 start.go:93] Provisioning new machine with config: &{Name:offline-docker-291000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-291000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:59:45.694663    3623 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 12:59:45.704178    3623 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0708 12:59:45.755315    3623 start.go:159] libmachine.API.Create for "offline-docker-291000" (driver="qemu2")
	I0708 12:59:45.755367    3623 client.go:168] LocalClient.Create starting
	I0708 12:59:45.755481    3623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 12:59:45.755547    3623 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:45.755565    3623 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:45.755626    3623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 12:59:45.755669    3623 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:45.755680    3623 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:45.756242    3623 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 12:59:45.957392    3623 main.go:141] libmachine: Creating SSH key...
	I0708 12:59:46.300170    3623 main.go:141] libmachine: Creating Disk image...
	I0708 12:59:46.300181    3623 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 12:59:46.300418    3623 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/disk.qcow2
	I0708 12:59:46.310054    3623 main.go:141] libmachine: STDOUT: 
	I0708 12:59:46.310081    3623 main.go:141] libmachine: STDERR: 
	I0708 12:59:46.310136    3623 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/disk.qcow2 +20000M
	I0708 12:59:46.318287    3623 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 12:59:46.318301    3623 main.go:141] libmachine: STDERR: 
	I0708 12:59:46.318317    3623 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/disk.qcow2
	I0708 12:59:46.318322    3623 main.go:141] libmachine: Starting QEMU VM...
	I0708 12:59:46.318360    3623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:7f:31:94:de:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/offline-docker-291000/disk.qcow2
	I0708 12:59:46.319911    3623 main.go:141] libmachine: STDOUT: 
	I0708 12:59:46.319924    3623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:59:46.319936    3623 client.go:171] duration metric: took 564.578459ms to LocalClient.Create
	I0708 12:59:48.322095    3623 start.go:128] duration metric: took 2.627485292s to createHost
	I0708 12:59:48.322185    3623 start.go:83] releasing machines lock for "offline-docker-291000", held for 2.627901083s
	W0708 12:59:48.322352    3623 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-291000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-291000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:59:48.330789    3623 out.go:177] 
	W0708 12:59:48.334786    3623 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 12:59:48.334813    3623 out.go:239] * 
	* 
	W0708 12:59:48.337576    3623 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:59:48.354822    3623 out.go:177] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-291000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-08 12:59:48.365399 -0700 PDT m=+1887.818789210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-291000 -n offline-docker-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-291000 -n offline-docker-291000: exit status 7 (63.521959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-291000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-291000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-291000
--- FAIL: TestOffline (10.20s)
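Nearly every ~10 s failure in this run, TestOffline included, dies the same way: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client and gets `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon is not listening on the agent. A minimal pre-flight sketch (hypothetical, not part of minikube) that would surface this environment problem before the suite runs:

// check_vmnet.go: hypothetical pre-flight check, mirroring what
// socket_vmnet_client needs: something listening on the unix socket.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	sock := "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config in the logs above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here is exactly what the failed starts report:
		// the socket_vmnet daemon is not running (or is listening elsewhere).
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening at", sock)
}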

                                                
                                    
TestCertOptions (10.05s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-750000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-750000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.789577167s)

                                                
                                                
-- stdout --
	* [cert-options-750000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-750000" primary control-plane node in "cert-options-750000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-750000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-750000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-750000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-750000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-750000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (77.845208ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-750000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-750000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-750000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-750000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-750000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.433334ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-750000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-750000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-750000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-08 13:00:18.606221 -0700 PDT m=+1918.060475168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-750000 -n cert-options-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-750000 -n cert-options-750000: exit status 7 (29.136541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-750000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-750000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-750000
--- FAIL: TestCertOptions (10.05s)
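
Note: every start failure in this block traces to the same stderr line shown above: the qemu2 driver could not reach the socket_vmnet unix socket at "/var/run/socket_vmnet" (Connection refused). A few spot-checks on the CI host, sketched below, would confirm whether the socket_vmnet daemon is up before re-running the suite; the paths are the ones printed in the log, and the launchd lookup is an assumption since the report does not show how the daemon is managed.

	# Does the unix socket exist, and is a socket_vmnet process running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is managed by launchd (assumed here), check its state:
	sudo launchctl list | grep -i socket_vmnet
	# End-to-end probe using the same client binary minikube invokes in the log
	# (sketch; the client should exit non-zero with the same "Connection refused"
	# message if the daemon is down):
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true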

                                                
                                    
TestCertExpiration (195.21s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-546000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-546000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.885183291s)

                                                
                                                
-- stdout --
	* [cert-expiration-546000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-546000" primary control-plane node in "cert-expiration-546000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-546000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-546000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-546000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-546000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-546000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.184567792s)

                                                
                                                
-- stdout --
	* [cert-expiration-546000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-546000" primary control-plane node in "cert-expiration-546000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-546000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-546000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-546000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-546000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-546000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-546000" primary control-plane node in "cert-expiration-546000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-546000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-546000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-546000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-08 13:03:18.57581 -0700 PDT m=+2098.035206876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-546000 -n cert-expiration-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-546000 -n cert-expiration-546000: exit status 7 (62.389791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-546000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-546000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-546000
--- FAIL: TestCertExpiration (195.21s)
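
Note: TestCertExpiration starts a cluster with --cert-expiration=3m, lets the certificates expire, restarts with --cert-expiration=8760h, and expects the second start to warn about expired certs (cert_options_test.go:136). Here neither start got past VM creation, so the warning check ran against the socket_vmnet error output instead. On a cluster that does come up, the expiry the test manipulates could be inspected manually along these lines (a sketch only; the profile name and cert path are the ones printed in the log above):

	# Inspect the apiserver certificate's expiry inside the node
	out/minikube-darwin-arm64 ssh -p cert-expiration-546000 -- \
	  "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"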

                                                
                                    
TestDockerFlags (10.1s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-537000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-537000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.86741275s)

                                                
                                                
-- stdout --
	* [docker-flags-537000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-537000" primary control-plane node in "docker-flags-537000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-537000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:59:58.586737    3816 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:59:58.586868    3816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:59:58.586871    3816 out.go:304] Setting ErrFile to fd 2...
	I0708 12:59:58.586876    3816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:59:58.587008    3816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:59:58.588095    3816 out.go:298] Setting JSON to false
	I0708 12:59:58.603985    3816 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3566,"bootTime":1720465232,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:59:58.604062    3816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:59:58.608348    3816 out.go:177] * [docker-flags-537000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:59:58.616754    3816 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:59:58.616793    3816 notify.go:220] Checking for updates...
	I0708 12:59:58.622706    3816 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:59:58.625743    3816 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:59:58.628705    3816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:59:58.631716    3816 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:59:58.634742    3816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:59:58.636427    3816 config.go:182] Loaded profile config "force-systemd-flag-803000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:59:58.636494    3816 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:59:58.636544    3816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:59:58.640650    3816 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 12:59:58.647540    3816 start.go:297] selected driver: qemu2
	I0708 12:59:58.647547    3816 start.go:901] validating driver "qemu2" against <nil>
	I0708 12:59:58.647556    3816 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:59:58.649960    3816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 12:59:58.652702    3816 out.go:177] * Automatically selected the socket_vmnet network
	I0708 12:59:58.655822    3816 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0708 12:59:58.655856    3816 cni.go:84] Creating CNI manager for ""
	I0708 12:59:58.655863    3816 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 12:59:58.655867    3816 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 12:59:58.655896    3816 start.go:340] cluster config:
	{Name:docker-flags-537000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:59:58.659712    3816 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:59:58.667736    3816 out.go:177] * Starting "docker-flags-537000" primary control-plane node in "docker-flags-537000" cluster
	I0708 12:59:58.671746    3816 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:59:58.671760    3816 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:59:58.671771    3816 cache.go:56] Caching tarball of preloaded images
	I0708 12:59:58.671827    3816 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:59:58.671835    3816 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:59:58.671903    3816 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/docker-flags-537000/config.json ...
	I0708 12:59:58.671915    3816 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/docker-flags-537000/config.json: {Name:mk0876bf2a2d9baadd2cf0f665f627df376e9c7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:59:58.672262    3816 start.go:360] acquireMachinesLock for docker-flags-537000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:59:58.672301    3816 start.go:364] duration metric: took 29.833µs to acquireMachinesLock for "docker-flags-537000"
	I0708 12:59:58.672311    3816 start.go:93] Provisioning new machine with config: &{Name:docker-flags-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:59:58.672341    3816 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 12:59:58.676696    3816 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0708 12:59:58.694201    3816 start.go:159] libmachine.API.Create for "docker-flags-537000" (driver="qemu2")
	I0708 12:59:58.694229    3816 client.go:168] LocalClient.Create starting
	I0708 12:59:58.694288    3816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 12:59:58.694318    3816 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:58.694327    3816 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:58.694364    3816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 12:59:58.694388    3816 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:58.694395    3816 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:58.694760    3816 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 12:59:58.842843    3816 main.go:141] libmachine: Creating SSH key...
	I0708 12:59:58.948343    3816 main.go:141] libmachine: Creating Disk image...
	I0708 12:59:58.948348    3816 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 12:59:58.948523    3816 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/disk.qcow2
	I0708 12:59:58.957947    3816 main.go:141] libmachine: STDOUT: 
	I0708 12:59:58.957965    3816 main.go:141] libmachine: STDERR: 
	I0708 12:59:58.958008    3816 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/disk.qcow2 +20000M
	I0708 12:59:58.965805    3816 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 12:59:58.965819    3816 main.go:141] libmachine: STDERR: 
	I0708 12:59:58.965829    3816 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/disk.qcow2
	I0708 12:59:58.965834    3816 main.go:141] libmachine: Starting QEMU VM...
	I0708 12:59:58.965863    3816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:12:88:26:7a:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/disk.qcow2
	I0708 12:59:58.967461    3816 main.go:141] libmachine: STDOUT: 
	I0708 12:59:58.967478    3816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:59:58.967497    3816 client.go:171] duration metric: took 273.271709ms to LocalClient.Create
	I0708 13:00:00.969617    3816 start.go:128] duration metric: took 2.2973205s to createHost
	I0708 13:00:00.969684    3816 start.go:83] releasing machines lock for "docker-flags-537000", held for 2.297439292s
	W0708 13:00:00.969755    3816 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:00:00.992749    3816 out.go:177] * Deleting "docker-flags-537000" in qemu2 ...
	W0708 13:00:01.014601    3816 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:00:01.014628    3816 start.go:728] Will try again in 5 seconds ...
	I0708 13:00:06.016690    3816 start.go:360] acquireMachinesLock for docker-flags-537000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:00:06.017230    3816 start.go:364] duration metric: took 368.167µs to acquireMachinesLock for "docker-flags-537000"
	I0708 13:00:06.017384    3816 start.go:93] Provisioning new machine with config: &{Name:docker-flags-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:00:06.017645    3816 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:00:06.026486    3816 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0708 13:00:06.077928    3816 start.go:159] libmachine.API.Create for "docker-flags-537000" (driver="qemu2")
	I0708 13:00:06.077975    3816 client.go:168] LocalClient.Create starting
	I0708 13:00:06.078091    3816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:00:06.078146    3816 main.go:141] libmachine: Decoding PEM data...
	I0708 13:00:06.078170    3816 main.go:141] libmachine: Parsing certificate...
	I0708 13:00:06.078235    3816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:00:06.078279    3816 main.go:141] libmachine: Decoding PEM data...
	I0708 13:00:06.078294    3816 main.go:141] libmachine: Parsing certificate...
	I0708 13:00:06.079419    3816 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:00:06.247032    3816 main.go:141] libmachine: Creating SSH key...
	I0708 13:00:06.361106    3816 main.go:141] libmachine: Creating Disk image...
	I0708 13:00:06.361111    3816 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:00:06.361286    3816 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/disk.qcow2
	I0708 13:00:06.370327    3816 main.go:141] libmachine: STDOUT: 
	I0708 13:00:06.370353    3816 main.go:141] libmachine: STDERR: 
	I0708 13:00:06.370415    3816 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/disk.qcow2 +20000M
	I0708 13:00:06.378160    3816 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:00:06.378175    3816 main.go:141] libmachine: STDERR: 
	I0708 13:00:06.378197    3816 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/disk.qcow2
	I0708 13:00:06.378201    3816 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:00:06.378235    3816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:ad:57:2a:1e:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/docker-flags-537000/disk.qcow2
	I0708 13:00:06.379856    3816 main.go:141] libmachine: STDOUT: 
	I0708 13:00:06.379873    3816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:00:06.379885    3816 client.go:171] duration metric: took 301.914292ms to LocalClient.Create
	I0708 13:00:08.382014    3816 start.go:128] duration metric: took 2.364410584s to createHost
	I0708 13:00:08.382090    3816 start.go:83] releasing machines lock for "docker-flags-537000", held for 2.364887125s
	W0708 13:00:08.382495    3816 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-537000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-537000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:00:08.392001    3816 out.go:177] 
	W0708 13:00:08.398226    3816 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:00:08.398270    3816 out.go:239] * 
	* 
	W0708 13:00:08.400939    3816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:00:08.411133    3816 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-537000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-537000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-537000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (75.938417ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-537000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-537000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-537000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-537000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-537000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-537000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-537000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-537000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-537000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (52.753666ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-537000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-537000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-537000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-537000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-537000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-537000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-08 13:00:08.557771 -0700 PDT m=+1908.011737626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-537000 -n docker-flags-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-537000 -n docker-flags-537000: exit status 7 (28.51425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-537000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-537000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-537000
--- FAIL: TestDockerFlags (10.10s)
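
Note: with a running node, the assertions at docker_test.go:56, :63, :67 and :73 boil down to the two systemctl queries below returning the values passed on the minikube start command line; here both returned the "host is not running" message (exit status 83) instead. Sketch, using the exact commands from the log:

	# Environment= should carry the --docker-env values (FOO=BAR, BAZ=BAT)
	out/minikube-darwin-arm64 -p docker-flags-537000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# ExecStart= should carry the --docker-opt values (e.g. --debug)
	out/minikube-darwin-arm64 -p docker-flags-537000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"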

                                                
                                    
TestForceSystemdFlag (10.38s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-803000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-803000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.188602334s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-803000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-803000" primary control-plane node in "force-systemd-flag-803000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-803000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:59:53.152343    3795 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:59:53.152499    3795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:59:53.152503    3795 out.go:304] Setting ErrFile to fd 2...
	I0708 12:59:53.152505    3795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:59:53.152777    3795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:59:53.154199    3795 out.go:298] Setting JSON to false
	I0708 12:59:53.170414    3795 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3561,"bootTime":1720465232,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:59:53.170493    3795 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:59:53.176241    3795 out.go:177] * [force-systemd-flag-803000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:59:53.183056    3795 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:59:53.183087    3795 notify.go:220] Checking for updates...
	I0708 12:59:53.192116    3795 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:59:53.196178    3795 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:59:53.199131    3795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:59:53.202141    3795 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:59:53.205109    3795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:59:53.208419    3795 config.go:182] Loaded profile config "force-systemd-env-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:59:53.208490    3795 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:59:53.208535    3795 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:59:53.213140    3795 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 12:59:53.220058    3795 start.go:297] selected driver: qemu2
	I0708 12:59:53.220063    3795 start.go:901] validating driver "qemu2" against <nil>
	I0708 12:59:53.220069    3795 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:59:53.222422    3795 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 12:59:53.225118    3795 out.go:177] * Automatically selected the socket_vmnet network
	I0708 12:59:53.226516    3795 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0708 12:59:53.226544    3795 cni.go:84] Creating CNI manager for ""
	I0708 12:59:53.226551    3795 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 12:59:53.226556    3795 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 12:59:53.226585    3795 start.go:340] cluster config:
	{Name:force-systemd-flag-803000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-803000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:59:53.230378    3795 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:59:53.238123    3795 out.go:177] * Starting "force-systemd-flag-803000" primary control-plane node in "force-systemd-flag-803000" cluster
	I0708 12:59:53.242101    3795 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:59:53.242114    3795 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:59:53.242121    3795 cache.go:56] Caching tarball of preloaded images
	I0708 12:59:53.242174    3795 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:59:53.242179    3795 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:59:53.242233    3795 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/force-systemd-flag-803000/config.json ...
	I0708 12:59:53.242245    3795 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/force-systemd-flag-803000/config.json: {Name:mk59278535dc143078909acb6455711a1955e978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:59:53.242469    3795 start.go:360] acquireMachinesLock for force-systemd-flag-803000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:59:53.242505    3795 start.go:364] duration metric: took 29.333µs to acquireMachinesLock for "force-systemd-flag-803000"
	I0708 12:59:53.242518    3795 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-803000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-803000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:59:53.242548    3795 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 12:59:53.251017    3795 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0708 12:59:53.269352    3795 start.go:159] libmachine.API.Create for "force-systemd-flag-803000" (driver="qemu2")
	I0708 12:59:53.269388    3795 client.go:168] LocalClient.Create starting
	I0708 12:59:53.269454    3795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 12:59:53.269485    3795 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:53.269494    3795 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:53.269549    3795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 12:59:53.269574    3795 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:53.269581    3795 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:53.270001    3795 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 12:59:53.416852    3795 main.go:141] libmachine: Creating SSH key...
	I0708 12:59:53.642584    3795 main.go:141] libmachine: Creating Disk image...
	I0708 12:59:53.642593    3795 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 12:59:53.642929    3795 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/disk.qcow2
	I0708 12:59:53.652711    3795 main.go:141] libmachine: STDOUT: 
	I0708 12:59:53.652728    3795 main.go:141] libmachine: STDERR: 
	I0708 12:59:53.652771    3795 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/disk.qcow2 +20000M
	I0708 12:59:53.660727    3795 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 12:59:53.660742    3795 main.go:141] libmachine: STDERR: 
	I0708 12:59:53.660756    3795 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/disk.qcow2
	I0708 12:59:53.660760    3795 main.go:141] libmachine: Starting QEMU VM...
	I0708 12:59:53.660805    3795 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:68:4c:09:e1:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/disk.qcow2
	I0708 12:59:53.662485    3795 main.go:141] libmachine: STDOUT: 
	I0708 12:59:53.662500    3795 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:59:53.662518    3795 client.go:171] duration metric: took 393.137542ms to LocalClient.Create
	I0708 12:59:55.664638    3795 start.go:128] duration metric: took 2.422138583s to createHost
	I0708 12:59:55.664691    3795 start.go:83] releasing machines lock for "force-systemd-flag-803000", held for 2.422244791s
	W0708 12:59:55.664752    3795 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:59:55.675965    3795 out.go:177] * Deleting "force-systemd-flag-803000" in qemu2 ...
	W0708 12:59:55.707480    3795 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:59:55.707509    3795 start.go:728] Will try again in 5 seconds ...
	I0708 13:00:00.709600    3795 start.go:360] acquireMachinesLock for force-systemd-flag-803000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:00:00.969842    3795 start.go:364] duration metric: took 260.120709ms to acquireMachinesLock for "force-systemd-flag-803000"
	I0708 13:00:00.970029    3795 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-803000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-803000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:00:00.970219    3795 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:00:00.981885    3795 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0708 13:00:01.031667    3795 start.go:159] libmachine.API.Create for "force-systemd-flag-803000" (driver="qemu2")
	I0708 13:00:01.031722    3795 client.go:168] LocalClient.Create starting
	I0708 13:00:01.031857    3795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:00:01.031921    3795 main.go:141] libmachine: Decoding PEM data...
	I0708 13:00:01.031935    3795 main.go:141] libmachine: Parsing certificate...
	I0708 13:00:01.031993    3795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:00:01.032041    3795 main.go:141] libmachine: Decoding PEM data...
	I0708 13:00:01.032055    3795 main.go:141] libmachine: Parsing certificate...
	I0708 13:00:01.032760    3795 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:00:01.201400    3795 main.go:141] libmachine: Creating SSH key...
	I0708 13:00:01.238970    3795 main.go:141] libmachine: Creating Disk image...
	I0708 13:00:01.238975    3795 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:00:01.239169    3795 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/disk.qcow2
	I0708 13:00:01.248659    3795 main.go:141] libmachine: STDOUT: 
	I0708 13:00:01.248676    3795 main.go:141] libmachine: STDERR: 
	I0708 13:00:01.248721    3795 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/disk.qcow2 +20000M
	I0708 13:00:01.256687    3795 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:00:01.256700    3795 main.go:141] libmachine: STDERR: 
	I0708 13:00:01.256717    3795 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/disk.qcow2
	I0708 13:00:01.256722    3795 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:00:01.256766    3795 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:50:af:45:14:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-flag-803000/disk.qcow2
	I0708 13:00:01.258408    3795 main.go:141] libmachine: STDOUT: 
	I0708 13:00:01.258421    3795 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:00:01.258432    3795 client.go:171] duration metric: took 226.711667ms to LocalClient.Create
	I0708 13:00:03.260686    3795 start.go:128] duration metric: took 2.290470041s to createHost
	I0708 13:00:03.260763    3795 start.go:83] releasing machines lock for "force-systemd-flag-803000", held for 2.290952208s
	W0708 13:00:03.261103    3795 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-803000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-803000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:00:03.281736    3795 out.go:177] 
	W0708 13:00:03.285607    3795 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:00:03.285639    3795 out.go:239] * 
	* 
	W0708 13:00:03.288262    3795 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:00:03.298626    3795 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-803000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-803000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-803000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.2415ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-flag-803000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-803000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-803000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-08 13:00:03.395737 -0700 PDT m=+1902.849556335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-803000 -n force-systemd-flag-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-803000 -n force-systemd-flag-803000: exit status 7 (35.552042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-803000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-803000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-803000
--- FAIL: TestForceSystemdFlag (10.38s)
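Both createHost attempts above fail at the same point: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the daemon socket and reports 'Failed to connect to "/var/run/socket_vmnet": Connection refused'. A minimal, hedged check to run on the build host before retrying is sketched below; the paths come from the log above, while the daemon invocation and its --vmnet-gateway flag are assumptions for a manual /opt/socket_vmnet install and may differ on this agent.

	# Is the socket present, and is a socket_vmnet daemon actually running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If no daemon is running, start one in the background (binary path and flag are
	# assumptions inferred from the socket_vmnet_client path used in the log above):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &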

                                                
                                    
TestForceSystemdEnv (10.05s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-827000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-827000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.870496583s)

                                                
                                                
-- stdout --
	* [force-systemd-env-827000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-827000" primary control-plane node in "force-systemd-env-827000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-827000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:59:48.534614    3776 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:59:48.534727    3776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:59:48.534730    3776 out.go:304] Setting ErrFile to fd 2...
	I0708 12:59:48.534733    3776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:59:48.534860    3776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:59:48.535935    3776 out.go:298] Setting JSON to false
	I0708 12:59:48.553823    3776 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3556,"bootTime":1720465232,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:59:48.553903    3776 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:59:48.560039    3776 out.go:177] * [force-systemd-env-827000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:59:48.568031    3776 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:59:48.568117    3776 notify.go:220] Checking for updates...
	I0708 12:59:48.576039    3776 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:59:48.578970    3776 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:59:48.582065    3776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:59:48.585072    3776 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:59:48.587945    3776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0708 12:59:48.591328    3776 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:59:48.591384    3776 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:59:48.595953    3776 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 12:59:48.602990    3776 start.go:297] selected driver: qemu2
	I0708 12:59:48.602996    3776 start.go:901] validating driver "qemu2" against <nil>
	I0708 12:59:48.603001    3776 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:59:48.605231    3776 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 12:59:48.608019    3776 out.go:177] * Automatically selected the socket_vmnet network
	I0708 12:59:48.611074    3776 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0708 12:59:48.611085    3776 cni.go:84] Creating CNI manager for ""
	I0708 12:59:48.611091    3776 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 12:59:48.611093    3776 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 12:59:48.611116    3776 start.go:340] cluster config:
	{Name:force-systemd-env-827000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:59:48.614490    3776 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:59:48.622057    3776 out.go:177] * Starting "force-systemd-env-827000" primary control-plane node in "force-systemd-env-827000" cluster
	I0708 12:59:48.626008    3776 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:59:48.626021    3776 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:59:48.626027    3776 cache.go:56] Caching tarball of preloaded images
	I0708 12:59:48.626078    3776 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:59:48.626083    3776 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:59:48.626129    3776 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/force-systemd-env-827000/config.json ...
	I0708 12:59:48.626139    3776 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/force-systemd-env-827000/config.json: {Name:mk2938e84a34b77f726ea6015c9b3d49db87b6a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:59:48.626390    3776 start.go:360] acquireMachinesLock for force-systemd-env-827000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:59:48.626422    3776 start.go:364] duration metric: took 24.084µs to acquireMachinesLock for "force-systemd-env-827000"
	I0708 12:59:48.626432    3776 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:59:48.626455    3776 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 12:59:48.633949    3776 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0708 12:59:48.649175    3776 start.go:159] libmachine.API.Create for "force-systemd-env-827000" (driver="qemu2")
	I0708 12:59:48.649206    3776 client.go:168] LocalClient.Create starting
	I0708 12:59:48.649271    3776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 12:59:48.649302    3776 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:48.649309    3776 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:48.649349    3776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 12:59:48.649371    3776 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:48.649381    3776 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:48.649722    3776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 12:59:48.790745    3776 main.go:141] libmachine: Creating SSH key...
	I0708 12:59:48.858782    3776 main.go:141] libmachine: Creating Disk image...
	I0708 12:59:48.858794    3776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 12:59:48.859004    3776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/disk.qcow2
	I0708 12:59:48.868451    3776 main.go:141] libmachine: STDOUT: 
	I0708 12:59:48.868478    3776 main.go:141] libmachine: STDERR: 
	I0708 12:59:48.868534    3776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/disk.qcow2 +20000M
	I0708 12:59:48.876763    3776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 12:59:48.876778    3776 main.go:141] libmachine: STDERR: 
	I0708 12:59:48.876790    3776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/disk.qcow2
	I0708 12:59:48.876795    3776 main.go:141] libmachine: Starting QEMU VM...
	I0708 12:59:48.876831    3776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:8b:dd:d6:d8:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/disk.qcow2
	I0708 12:59:48.878474    3776 main.go:141] libmachine: STDOUT: 
	I0708 12:59:48.878490    3776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:59:48.878510    3776 client.go:171] duration metric: took 229.306375ms to LocalClient.Create
	I0708 12:59:50.880664    3776 start.go:128] duration metric: took 2.254243542s to createHost
	I0708 12:59:50.880734    3776 start.go:83] releasing machines lock for "force-systemd-env-827000", held for 2.25436625s
	W0708 12:59:50.880803    3776 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:59:50.888088    3776 out.go:177] * Deleting "force-systemd-env-827000" in qemu2 ...
	W0708 12:59:50.916075    3776 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:59:50.916112    3776 start.go:728] Will try again in 5 seconds ...
	I0708 12:59:55.918132    3776 start.go:360] acquireMachinesLock for force-systemd-env-827000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:59:55.918552    3776 start.go:364] duration metric: took 290.458µs to acquireMachinesLock for "force-systemd-env-827000"
	I0708 12:59:55.918671    3776 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:59:55.918956    3776 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 12:59:55.927489    3776 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0708 12:59:55.977932    3776 start.go:159] libmachine.API.Create for "force-systemd-env-827000" (driver="qemu2")
	I0708 12:59:55.977982    3776 client.go:168] LocalClient.Create starting
	I0708 12:59:55.978098    3776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 12:59:55.978166    3776 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:55.978180    3776 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:55.978248    3776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 12:59:55.978295    3776 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:55.978306    3776 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:55.979410    3776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 12:59:56.142240    3776 main.go:141] libmachine: Creating SSH key...
	I0708 12:59:56.312621    3776 main.go:141] libmachine: Creating Disk image...
	I0708 12:59:56.312630    3776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 12:59:56.312825    3776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/disk.qcow2
	I0708 12:59:56.322199    3776 main.go:141] libmachine: STDOUT: 
	I0708 12:59:56.322218    3776 main.go:141] libmachine: STDERR: 
	I0708 12:59:56.322279    3776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/disk.qcow2 +20000M
	I0708 12:59:56.330100    3776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 12:59:56.330118    3776 main.go:141] libmachine: STDERR: 
	I0708 12:59:56.330128    3776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/disk.qcow2
	I0708 12:59:56.330132    3776 main.go:141] libmachine: Starting QEMU VM...
	I0708 12:59:56.330170    3776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:b6:bc:1a:67:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/force-systemd-env-827000/disk.qcow2
	I0708 12:59:56.331854    3776 main.go:141] libmachine: STDOUT: 
	I0708 12:59:56.331872    3776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:59:56.331885    3776 client.go:171] duration metric: took 353.90775ms to LocalClient.Create
	I0708 12:59:58.333988    3776 start.go:128] duration metric: took 2.415075875s to createHost
	I0708 12:59:58.334036    3776 start.go:83] releasing machines lock for "force-systemd-env-827000", held for 2.415528833s
	W0708 12:59:58.334450    3776 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-827000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-827000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:59:58.345935    3776 out.go:177] 
	W0708 12:59:58.350013    3776 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 12:59:58.350237    3776 out.go:239] * 
	* 
	W0708 12:59:58.352931    3776 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:59:58.360993    3776 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-827000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-827000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-827000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (74.370792ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-827000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-827000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-827000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-08 12:59:58.452543 -0700 PDT m=+1897.906221168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-827000 -n force-systemd-env-827000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-827000 -n force-systemd-env-827000: exit status 7 (30.926666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-827000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-827000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-827000
--- FAIL: TestForceSystemdEnv (10.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (37.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-183000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-183000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-nm82s" [03efa042-10c7-44d9-9a9a-15839a1eef4d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-nm82s" [03efa042-10c7-44d9-9a9a-15839a1eef4d] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.005090833s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:31984
functional_test.go:1657: error fetching http://192.168.105.4:31984: Get "http://192.168.105.4:31984": dial tcp 192.168.105.4:31984: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31984: Get "http://192.168.105.4:31984": dial tcp 192.168.105.4:31984: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31984: Get "http://192.168.105.4:31984": dial tcp 192.168.105.4:31984: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31984: Get "http://192.168.105.4:31984": dial tcp 192.168.105.4:31984: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31984: Get "http://192.168.105.4:31984": dial tcp 192.168.105.4:31984: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31984: Get "http://192.168.105.4:31984": dial tcp 192.168.105.4:31984: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31984: Get "http://192.168.105.4:31984": dial tcp 192.168.105.4:31984: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31984: Get "http://192.168.105.4:31984": dial tcp 192.168.105.4:31984: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:31984: Get "http://192.168.105.4:31984": dial tcp 192.168.105.4:31984: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-183000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-nm82s
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-183000/192.168.105.4
Start Time:       Mon, 08 Jul 2024 12:37:26 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://9f176fd6218794b85ad06335e77e282f7eb2041eb177b19d0a9b9184e4c7f344
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 08 Jul 2024 12:37:46 -0700
      Finished:     Mon, 08 Jul 2024 12:37:46 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cpxdw (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-cpxdw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  36s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-nm82s to functional-183000
  Normal   Pulling    35s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     32s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.007s (3.007s including waiting). Image size: 84957542 bytes.
  Normal   Created    16s (x3 over 32s)  kubelet            Created container echoserver-arm
  Normal   Started    16s (x3 over 32s)  kubelet            Started container echoserver-arm
  Normal   Pulled     16s (x2 over 32s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    5s (x4 over 31s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-nm82s_default(03efa042-10c7-44d9-9a9a-15839a1eef4d)

                                                
                                                
functional_test.go:1604: (dbg) Run:  kubectl --context functional-183000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1610: (dbg) Run:  kubectl --context functional-183000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.30.166
IPs:                      10.106.30.166
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31984/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
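Taken together, the pod log and the service description explain the connection-refused loop above: the echoserver-arm container exits immediately with "exec format error" (the /usr/sbin/nginx binary it tries to run does not match the arm64 node), so the pod never becomes Ready and the NodePort service is left with no Endpoints. A hedged way to check the image's declared platform on the node is sketched below, using the same ssh invocation style as the rest of this report; the inspect format string is an assumption, and if it already reports linux/arm64 then the wrong-architecture binary is packaged inside the image itself rather than being a pull problem.

	# Query the OS/architecture recorded for the pulled image on the minikube node:
	out/minikube-darwin-arm64 -p functional-183000 ssh "docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8"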
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-183000 -n functional-183000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-183000                                                                                                | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4196396720/001:/mount-9p     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh findmnt                                                                                       | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh findmnt                                                                                       | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT | 08 Jul 24 12:37 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh -- ls                                                                                         | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT | 08 Jul 24 12:37 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh cat                                                                                           | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT | 08 Jul 24 12:37 PDT |
	|           | /mount-9p/test-1720467469201942000                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh stat                                                                                          | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT | 08 Jul 24 12:37 PDT |
	|           | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh stat                                                                                          | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT | 08 Jul 24 12:37 PDT |
	|           | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh sudo                                                                                          | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT | 08 Jul 24 12:37 PDT |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh findmnt                                                                                       | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-183000                                                                                                | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port385089788/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh findmnt                                                                                       | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT | 08 Jul 24 12:37 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh -- ls                                                                                         | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT | 08 Jul 24 12:37 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh sudo                                                                                          | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-183000                                                                                                | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup551917233/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-183000                                                                                                | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup551917233/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-183000                                                                                                | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup551917233/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh findmnt                                                                                       | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh findmnt                                                                                       | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT | 08 Jul 24 12:37 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh findmnt                                                                                       | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT | 08 Jul 24 12:37 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-183000 ssh findmnt                                                                                       | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT | 08 Jul 24 12:37 PDT |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-183000                                                                                                | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| start     | -p functional-183000                                                                                                | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-183000                                                                                                | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-183000 --dry-run                                                                                      | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-183000 | jenkins | v1.33.1 | 08 Jul 24 12:37 PDT |                     |
	|           | -p functional-183000                                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 12:37:55
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 12:37:55.667962    2431 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:37:55.668083    2431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:37:55.668086    2431 out.go:304] Setting ErrFile to fd 2...
	I0708 12:37:55.668088    2431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:37:55.668209    2431 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:37:55.669195    2431 out.go:298] Setting JSON to false
	I0708 12:37:55.685550    2431 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2243,"bootTime":1720465232,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:37:55.685620    2431 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:37:55.689348    2431 out.go:177] * [functional-183000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:37:55.696335    2431 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:37:55.696372    2431 notify.go:220] Checking for updates...
	I0708 12:37:55.703322    2431 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:37:55.706354    2431 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:37:55.709276    2431 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:37:55.712311    2431 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:37:55.715372    2431 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:37:55.718521    2431 config.go:182] Loaded profile config "functional-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:37:55.718778    2431 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:37:55.722339    2431 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 12:37:55.729221    2431 start.go:297] selected driver: qemu2
	I0708 12:37:55.729227    2431 start.go:901] validating driver "qemu2" against &{Name:functional-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-183000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:37:55.729271    2431 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:37:55.731610    2431 cni.go:84] Creating CNI manager for ""
	I0708 12:37:55.731647    2431 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 12:37:55.731688    2431 start.go:340] cluster config:
	{Name:functional-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-183000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:37:55.743294    2431 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Jul 08 19:37:56 functional-183000 dockerd[6023]: time="2024-07-08T19:37:56.654112189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:37:56 functional-183000 dockerd[6023]: time="2024-07-08T19:37:56.654165856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:37:56 functional-183000 dockerd[6023]: time="2024-07-08T19:37:56.654179314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:37:56 functional-183000 dockerd[6023]: time="2024-07-08T19:37:56.654230439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:37:56 functional-183000 dockerd[6023]: time="2024-07-08T19:37:56.664266356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:37:56 functional-183000 dockerd[6023]: time="2024-07-08T19:37:56.664309564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:37:56 functional-183000 dockerd[6023]: time="2024-07-08T19:37:56.664317356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:37:56 functional-183000 dockerd[6023]: time="2024-07-08T19:37:56.664350523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:37:56 functional-183000 cri-dockerd[6271]: time="2024-07-08T19:37:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f991c022538af475ee49c1ebbec46cc2fe3ca61fe21f068e8bd733482555794d/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 08 19:37:56 functional-183000 cri-dockerd[6271]: time="2024-07-08T19:37:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6e83167f053fe3dbf811c0921ad11a93888c632b39987bc96accc97fa09a8ec0/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 08 19:37:56 functional-183000 dockerd[6017]: time="2024-07-08T19:37:56.963172134Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Jul 08 19:37:57 functional-183000 dockerd[6023]: time="2024-07-08T19:37:57.526190298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:37:57 functional-183000 dockerd[6023]: time="2024-07-08T19:37:57.526217048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:37:57 functional-183000 dockerd[6023]: time="2024-07-08T19:37:57.526225298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:37:57 functional-183000 dockerd[6023]: time="2024-07-08T19:37:57.526253798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:37:57 functional-183000 dockerd[6017]: time="2024-07-08T19:37:57.567404503Z" level=info msg="ignoring event" container=5e30cced0642aa107baab36c88539b724a32bfdbe892ae588b6cadb1c5766983 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 08 19:37:57 functional-183000 dockerd[6023]: time="2024-07-08T19:37:57.567492086Z" level=info msg="shim disconnected" id=5e30cced0642aa107baab36c88539b724a32bfdbe892ae588b6cadb1c5766983 namespace=moby
	Jul 08 19:37:57 functional-183000 dockerd[6023]: time="2024-07-08T19:37:57.567525002Z" level=warning msg="cleaning up after shim disconnected" id=5e30cced0642aa107baab36c88539b724a32bfdbe892ae588b6cadb1c5766983 namespace=moby
	Jul 08 19:37:57 functional-183000 dockerd[6023]: time="2024-07-08T19:37:57.567529169Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 08 19:37:58 functional-183000 cri-dockerd[6271]: time="2024-07-08T19:37:58Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Jul 08 19:37:58 functional-183000 dockerd[6023]: time="2024-07-08T19:37:58.756788163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:37:58 functional-183000 dockerd[6023]: time="2024-07-08T19:37:58.756831038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:37:58 functional-183000 dockerd[6023]: time="2024-07-08T19:37:58.756838371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:37:58 functional-183000 dockerd[6023]: time="2024-07-08T19:37:58.756868996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:37:58 functional-183000 dockerd[6017]: time="2024-07-08T19:37:58.928863483Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	9aa453c85f8d6       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   5 seconds ago        Running             dashboard-metrics-scraper   0                   f991c022538af       dashboard-metrics-scraper-b5fc48f67-b5h29
	5e30cced0642a       72565bf5bbedf                                                                                          6 seconds ago        Exited              echoserver-arm              2                   1ac7488d82067       hello-node-65f5d5cc78-gvmdh
	5e63ad7e66a49       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    12 seconds ago       Exited              mount-munger                0                   a63add49f1e08       busybox-mount
	9f176fd621879       72565bf5bbedf                                                                                          17 seconds ago       Exited              echoserver-arm              2                   6d2eaf7c9f861       hello-node-connect-6f49f58cd5-nm82s
	0cf51fb3c39a0       nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df                          27 seconds ago       Running             myfrontend                  0                   933a9432bb8e0       sp-pod
	ba20d7d65007e       nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                          44 seconds ago       Running             nginx                       0                   ab6111141c0dc       nginx-svc
	15940d9bc01e2       2437cf7621777                                                                                          About a minute ago   Running             coredns                     2                   0ae487105abb6       coredns-7db6d8ff4d-jgfpz
	0e35d1b15d217       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         2                   887f61167b156       storage-provisioner
	7aec0514c3783       66dbb96a9149f                                                                                          About a minute ago   Running             kube-proxy                  2                   6a8dc38bf2ea3       kube-proxy-l8rr6
	661630f25e107       c7dd04b1bafeb                                                                                          About a minute ago   Running             kube-scheduler              2                   ae451901ca68a       kube-scheduler-functional-183000
	ac214157b2a73       e1dcc3400d3ea                                                                                          About a minute ago   Running             kube-controller-manager     2                   4feac60f1b512       kube-controller-manager-functional-183000
	05697a05b72bc       014faa467e297                                                                                          About a minute ago   Running             etcd                        2                   f933b487a5c79       etcd-functional-183000
	a6db124abb1d2       84c601f3f72c8                                                                                          About a minute ago   Running             kube-apiserver              0                   9d96d0d1a0dd2       kube-apiserver-functional-183000
	6243b54232b08       2437cf7621777                                                                                          About a minute ago   Exited              coredns                     1                   f9e0a51d1d037       coredns-7db6d8ff4d-jgfpz
	d7a61966ad5df       ba04bb24b9575                                                                                          About a minute ago   Exited              storage-provisioner         1                   a9df91ed396c2       storage-provisioner
	34a45b92bc6b1       66dbb96a9149f                                                                                          About a minute ago   Exited              kube-proxy                  1                   7e4c3f59ec1cb       kube-proxy-l8rr6
	df0aed8c1f368       c7dd04b1bafeb                                                                                          About a minute ago   Exited              kube-scheduler              1                   47ef778ae8302       kube-scheduler-functional-183000
	9662a76049242       014faa467e297                                                                                          About a minute ago   Exited              etcd                        1                   e2796d3a2253b       etcd-functional-183000
	b7781fb188e51       e1dcc3400d3ea                                                                                          About a minute ago   Exited              kube-controller-manager     1                   a2092a4c94b54       kube-controller-manager-functional-183000
	
	
	==> coredns [15940d9bc01e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50279 - 64615 "HINFO IN 4098194550810623594.8216680281224779339. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009551975s
	[INFO] 10.244.0.1:1081 - 32296 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.00009625s
	[INFO] 10.244.0.1:18248 - 30448 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000092625s
	[INFO] 10.244.0.1:29613 - 14092 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000031375s
	[INFO] 10.244.0.1:33613 - 56896 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001045041s
	[INFO] 10.244.0.1:40109 - 31960 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000070333s
	[INFO] 10.244.0.1:41701 - 47778 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000024875s
	
	
	==> coredns [6243b54232b0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57270 - 6743 "HINFO IN 4013316273907390663.1072590719924977148. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009356989s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-183000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-183000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=functional-183000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T12_35_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:35:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-183000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 19:37:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:37:54 +0000   Mon, 08 Jul 2024 19:35:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:37:54 +0000   Mon, 08 Jul 2024 19:35:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:37:54 +0000   Mon, 08 Jul 2024 19:35:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:37:54 +0000   Mon, 08 Jul 2024 19:35:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-183000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904748Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904748Ki
	  pods:               110
	System Info:
	  Machine ID:                 eedc43054b8342339912273536bdbd44
	  System UUID:                eedc43054b8342339912273536bdbd44
	  Boot ID:                    2ac968c8-2606-4790-b413-771a4b63c50b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-gvmdh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  default                     hello-node-connect-6f49f58cd5-nm82s          0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 coredns-7db6d8ff4d-jgfpz                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m16s
	  kube-system                 etcd-functional-183000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m31s
	  kube-system                 kube-apiserver-functional-183000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-functional-183000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-l8rr6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-functional-183000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-b5h29    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-t48kf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  Starting                 70s                    kube-proxy       
	  Normal  Starting                 114s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m31s (x2 over 2m31s)  kubelet          Node functional-183000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m31s (x2 over 2m31s)  kubelet          Node functional-183000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m31s (x2 over 2m31s)  kubelet          Node functional-183000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m27s                  kubelet          Node functional-183000 status is now: NodeReady
	  Normal  RegisteredNode           2m18s                  node-controller  Node functional-183000 event: Registered Node functional-183000 in Controller
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)    kubelet          Node functional-183000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)    kubelet          Node functional-183000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 118s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     118s (x7 over 118s)    kubelet          Node functional-183000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           102s                   node-controller  Node functional-183000 event: Registered Node functional-183000 in Controller
	  Normal  Starting                 74s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s (x8 over 74s)      kubelet          Node functional-183000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s (x8 over 74s)      kubelet          Node functional-183000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s (x7 over 74s)      kubelet          Node functional-183000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           59s                    node-controller  Node functional-183000 event: Registered Node functional-183000 in Controller
	
	
	==> dmesg <==
	[  +3.403959] kauditd_printk_skb: 199 callbacks suppressed
	[ +12.491643] kauditd_printk_skb: 31 callbacks suppressed
	[  +4.099377] systemd-fstab-generator[5109]: Ignoring "noauto" option for root device
	[  +9.279753] systemd-fstab-generator[5541]: Ignoring "noauto" option for root device
	[  +0.055072] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.116699] systemd-fstab-generator[5574]: Ignoring "noauto" option for root device
	[  +0.113558] systemd-fstab-generator[5586]: Ignoring "noauto" option for root device
	[  +0.123221] systemd-fstab-generator[5600]: Ignoring "noauto" option for root device
	[  +5.093408] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.318159] systemd-fstab-generator[6224]: Ignoring "noauto" option for root device
	[  +0.093053] systemd-fstab-generator[6236]: Ignoring "noauto" option for root device
	[  +0.088199] systemd-fstab-generator[6248]: Ignoring "noauto" option for root device
	[  +0.103770] systemd-fstab-generator[6263]: Ignoring "noauto" option for root device
	[  +0.220665] systemd-fstab-generator[6426]: Ignoring "noauto" option for root device
	[  +1.139751] systemd-fstab-generator[6548]: Ignoring "noauto" option for root device
	[  +3.414699] kauditd_printk_skb: 199 callbacks suppressed
	[Jul 8 19:37] kauditd_printk_skb: 33 callbacks suppressed
	[  +2.551767] systemd-fstab-generator[7553]: Ignoring "noauto" option for root device
	[  +4.797430] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.130569] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.950595] kauditd_printk_skb: 13 callbacks suppressed
	[  +7.722365] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.616524] kauditd_printk_skb: 24 callbacks suppressed
	[ +14.371036] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.880526] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [05697a05b72b] <==
	{"level":"info","ts":"2024-07-08T19:36:50.213557Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T19:36:50.218485Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T19:36:50.218519Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T19:36:50.213687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-08T19:36:50.218634Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-08T19:36:50.218709Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:36:50.218743Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:36:50.213502Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-08T19:36:50.220142Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-08T19:36:50.220599Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T19:36:50.220653Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T19:36:51.597062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-08T19:36:51.59713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-08T19:36:51.597151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-08T19:36:51.597169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-08T19:36:51.597197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-08T19:36:51.597212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-08T19:36:51.597224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-08T19:36:51.598304Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-183000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T19:36:51.59834Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:36:51.598906Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:36:51.600951Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-08T19:36:51.603284Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T19:36:51.611883Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T19:36:51.611918Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [9662a7604924] <==
	{"level":"info","ts":"2024-07-08T19:36:06.492688Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-08T19:36:08.149397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-08T19:36:08.150232Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-08T19:36:08.150894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-08T19:36:08.15097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-08T19:36:08.150991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-08T19:36:08.151037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-08T19:36:08.15106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-08T19:36:08.15573Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-183000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T19:36:08.155871Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:36:08.156Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T19:36:08.156054Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T19:36:08.156091Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:36:08.161228Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-08T19:36:08.162968Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T19:36:35.4247Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-08T19:36:35.424724Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-183000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-08T19:36:35.424776Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T19:36:35.424815Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T19:36:35.429449Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T19:36:35.429466Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-08T19:36:35.431528Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-08T19:36:35.435423Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-08T19:36:35.435492Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-08T19:36:35.435498Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-183000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 19:38:03 up 2 min,  0 users,  load average: 0.90, 0.52, 0.21
	Linux functional-183000 5.10.207 #1 SMP PREEMPT Wed Jul 3 15:00:24 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a6db124abb1d] <==
	I0708 19:36:52.244361       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0708 19:36:52.244373       1 aggregator.go:165] initial CRD sync complete...
	I0708 19:36:52.244376       1 autoregister_controller.go:141] Starting autoregister controller
	I0708 19:36:52.244390       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0708 19:36:52.244392       1 cache.go:39] Caches are synced for autoregister controller
	I0708 19:36:52.246611       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0708 19:36:52.246620       1 policy_source.go:224] refreshing policies
	I0708 19:36:52.269230       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 19:36:53.125025       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0708 19:36:53.228906       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0708 19:36:53.229419       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 19:36:53.231016       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 19:36:53.573213       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0708 19:36:53.576903       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 19:36:53.592790       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 19:36:53.600586       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 19:36:53.602691       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0708 19:37:11.631580       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.229.72"}
	I0708 19:37:16.413576       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.184.105"}
	I0708 19:37:26.771627       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0708 19:37:26.818162       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.30.166"}
	I0708 19:37:42.042457       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.145.29"}
	I0708 19:37:56.236518       1 controller.go:615] quota admission added evaluator for: namespaces
	I0708 19:37:56.294209       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.163.226"}
	I0708 19:37:56.306982       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.251.141"}
	
	
	==> kube-controller-manager [ac214157b2a7] <==
	I0708 19:37:56.264246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="9.609419ms"
	E0708 19:37:56.264266       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0708 19:37:56.268831       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="6.717473ms"
	E0708 19:37:56.269128       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0708 19:37:56.269958       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="5.683976ms"
	E0708 19:37:56.270007       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0708 19:37:56.272629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="3.462319ms"
	E0708 19:37:56.272713       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0708 19:37:56.274980       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="4.792189ms"
	E0708 19:37:56.274997       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0708 19:37:56.278568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="3.444819ms"
	E0708 19:37:56.278586       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0708 19:37:56.305854       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="10.660665ms"
	I0708 19:37:56.325101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="26.406142ms"
	I0708 19:37:56.325330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="8.071967ms"
	I0708 19:37:56.325385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="10µs"
	I0708 19:37:56.329778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="28.833µs"
	I0708 19:37:56.331446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="6.163975ms"
	I0708 19:37:56.331504       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="39.584µs"
	I0708 19:37:56.358053       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="26.541µs"
	I0708 19:37:57.508573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="556.746µs"
	I0708 19:37:57.514545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="22.916µs"
	I0708 19:37:57.989014       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="24.917µs"
	I0708 19:37:58.999541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="3.408979ms"
	I0708 19:37:58.999603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="37.75µs"
	
	
	==> kube-controller-manager [b7781fb188e5] <==
	I0708 19:36:21.337335       1 shared_informer.go:320] Caches are synced for stateful set
	I0708 19:36:21.341550       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0708 19:36:21.341562       1 shared_informer.go:320] Caches are synced for service account
	I0708 19:36:21.341605       1 shared_informer.go:320] Caches are synced for disruption
	I0708 19:36:21.341635       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0708 19:36:21.343165       1 shared_informer.go:320] Caches are synced for node
	I0708 19:36:21.343207       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0708 19:36:21.343243       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0708 19:36:21.343273       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0708 19:36:21.343291       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0708 19:36:21.345669       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0708 19:36:21.345712       1 shared_informer.go:320] Caches are synced for deployment
	I0708 19:36:21.345712       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.958µs"
	I0708 19:36:21.347899       1 shared_informer.go:320] Caches are synced for HPA
	I0708 19:36:21.350084       1 shared_informer.go:320] Caches are synced for cronjob
	I0708 19:36:21.351173       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0708 19:36:21.527831       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0708 19:36:21.530001       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0708 19:36:21.537001       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:36:21.541310       1 shared_informer.go:320] Caches are synced for attach detach
	I0708 19:36:21.542972       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:36:21.544256       1 shared_informer.go:320] Caches are synced for endpoint
	I0708 19:36:21.952064       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:36:21.970227       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:36:21.970240       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [34a45b92bc6b] <==
	I0708 19:36:09.272952       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:36:09.276289       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0708 19:36:09.283526       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:36:09.283558       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:36:09.283565       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:36:09.284209       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:36:09.284286       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:36:09.284295       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:36:09.284708       1 config.go:192] "Starting service config controller"
	I0708 19:36:09.284718       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:36:09.284726       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:36:09.284728       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:36:09.285001       1 config.go:319] "Starting node config controller"
	I0708 19:36:09.285456       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:36:09.385330       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 19:36:09.385332       1 shared_informer.go:320] Caches are synced for service config
	I0708 19:36:09.385602       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7aec0514c378] <==
	I0708 19:36:53.064658       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:36:53.068244       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0708 19:36:53.081145       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:36:53.081160       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:36:53.081167       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:36:53.081860       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:36:53.081935       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:36:53.081943       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:36:53.082290       1 config.go:192] "Starting service config controller"
	I0708 19:36:53.082296       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:36:53.082305       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:36:53.082307       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:36:53.082527       1 config.go:319] "Starting node config controller"
	I0708 19:36:53.082529       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:36:53.183235       1 shared_informer.go:320] Caches are synced for service config
	I0708 19:36:53.183234       1 shared_informer.go:320] Caches are synced for node config
	I0708 19:36:53.183245       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [661630f25e10] <==
	I0708 19:36:50.810244       1 serving.go:380] Generated self-signed cert in-memory
	W0708 19:36:52.151654       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 19:36:52.151672       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 19:36:52.151678       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 19:36:52.151681       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 19:36:52.170681       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0708 19:36:52.170719       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:36:52.171555       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0708 19:36:52.171614       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0708 19:36:52.171629       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 19:36:52.171652       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0708 19:36:52.272032       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [df0aed8c1f36] <==
	I0708 19:36:06.836280       1 serving.go:380] Generated self-signed cert in-memory
	W0708 19:36:08.675857       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 19:36:08.675876       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 19:36:08.675881       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 19:36:08.675885       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 19:36:08.719122       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0708 19:36:08.719136       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:36:08.719926       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0708 19:36:08.719972       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0708 19:36:08.719980       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 19:36:08.720074       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0708 19:36:08.822527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 19:36:35.404891       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0708 19:36:35.404936       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0708 19:36:35.404980       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 08 19:37:49 functional-183000 kubelet[6555]: I0708 19:37:49.584588    6555 scope.go:117] "RemoveContainer" containerID="bec554641c38935e1c9d2af7d6469ea51dbf19f21e16e471814d9f5cf3c0759e"
	Jul 08 19:37:50 functional-183000 kubelet[6555]: I0708 19:37:50.088004    6555 topology_manager.go:215] "Topology Admit Handler" podUID="aeffb938-1400-4c5d-927d-0db6653abfde" podNamespace="default" podName="busybox-mount"
	Jul 08 19:37:50 functional-183000 kubelet[6555]: I0708 19:37:50.242985    6555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/aeffb938-1400-4c5d-927d-0db6653abfde-test-volume\") pod \"busybox-mount\" (UID: \"aeffb938-1400-4c5d-927d-0db6653abfde\") " pod="default/busybox-mount"
	Jul 08 19:37:50 functional-183000 kubelet[6555]: I0708 19:37:50.243012    6555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmtq4\" (UniqueName: \"kubernetes.io/projected/aeffb938-1400-4c5d-927d-0db6653abfde-kube-api-access-xmtq4\") pod \"busybox-mount\" (UID: \"aeffb938-1400-4c5d-927d-0db6653abfde\") " pod="default/busybox-mount"
	Jul 08 19:37:53 functional-183000 kubelet[6555]: I0708 19:37:53.161861    6555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeffb938-1400-4c5d-927d-0db6653abfde-test-volume" (OuterVolumeSpecName: "test-volume") pod "aeffb938-1400-4c5d-927d-0db6653abfde" (UID: "aeffb938-1400-4c5d-927d-0db6653abfde"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 08 19:37:53 functional-183000 kubelet[6555]: I0708 19:37:53.161808    6555 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/aeffb938-1400-4c5d-927d-0db6653abfde-test-volume\") pod \"aeffb938-1400-4c5d-927d-0db6653abfde\" (UID: \"aeffb938-1400-4c5d-927d-0db6653abfde\") "
	Jul 08 19:37:53 functional-183000 kubelet[6555]: I0708 19:37:53.161907    6555 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmtq4\" (UniqueName: \"kubernetes.io/projected/aeffb938-1400-4c5d-927d-0db6653abfde-kube-api-access-xmtq4\") pod \"aeffb938-1400-4c5d-927d-0db6653abfde\" (UID: \"aeffb938-1400-4c5d-927d-0db6653abfde\") "
	Jul 08 19:37:53 functional-183000 kubelet[6555]: I0708 19:37:53.161928    6555 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/aeffb938-1400-4c5d-927d-0db6653abfde-test-volume\") on node \"functional-183000\" DevicePath \"\""
	Jul 08 19:37:53 functional-183000 kubelet[6555]: I0708 19:37:53.164662    6555 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeffb938-1400-4c5d-927d-0db6653abfde-kube-api-access-xmtq4" (OuterVolumeSpecName: "kube-api-access-xmtq4") pod "aeffb938-1400-4c5d-927d-0db6653abfde" (UID: "aeffb938-1400-4c5d-927d-0db6653abfde"). InnerVolumeSpecName "kube-api-access-xmtq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 08 19:37:53 functional-183000 kubelet[6555]: I0708 19:37:53.263275    6555 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xmtq4\" (UniqueName: \"kubernetes.io/projected/aeffb938-1400-4c5d-927d-0db6653abfde-kube-api-access-xmtq4\") on node \"functional-183000\" DevicePath \"\""
	Jul 08 19:37:53 functional-183000 kubelet[6555]: I0708 19:37:53.954817    6555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a63add49f1e0830d6d570a5e9cfa6e4b766b13159184b641fb9dc13762ad128c"
	Jul 08 19:37:56 functional-183000 kubelet[6555]: I0708 19:37:56.307596    6555 topology_manager.go:215] "Topology Admit Handler" podUID="c659e88d-7095-4549-b994-8335cb87a390" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-b5h29"
	Jul 08 19:37:56 functional-183000 kubelet[6555]: E0708 19:37:56.307630    6555 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aeffb938-1400-4c5d-927d-0db6653abfde" containerName="mount-munger"
	Jul 08 19:37:56 functional-183000 kubelet[6555]: I0708 19:37:56.307645    6555 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeffb938-1400-4c5d-927d-0db6653abfde" containerName="mount-munger"
	Jul 08 19:37:56 functional-183000 kubelet[6555]: I0708 19:37:56.322250    6555 topology_manager.go:215] "Topology Admit Handler" podUID="f59ccd24-4234-447a-a949-3084ed2ea6ff" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-t48kf"
	Jul 08 19:37:56 functional-183000 kubelet[6555]: I0708 19:37:56.477892    6555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c659e88d-7095-4549-b994-8335cb87a390-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-b5h29\" (UID: \"c659e88d-7095-4549-b994-8335cb87a390\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-b5h29"
	Jul 08 19:37:56 functional-183000 kubelet[6555]: I0708 19:37:56.477914    6555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb9bn\" (UniqueName: \"kubernetes.io/projected/f59ccd24-4234-447a-a949-3084ed2ea6ff-kube-api-access-cb9bn\") pod \"kubernetes-dashboard-779776cb65-t48kf\" (UID: \"f59ccd24-4234-447a-a949-3084ed2ea6ff\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-t48kf"
	Jul 08 19:37:56 functional-183000 kubelet[6555]: I0708 19:37:56.477925    6555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-875k5\" (UniqueName: \"kubernetes.io/projected/c659e88d-7095-4549-b994-8335cb87a390-kube-api-access-875k5\") pod \"dashboard-metrics-scraper-b5fc48f67-b5h29\" (UID: \"c659e88d-7095-4549-b994-8335cb87a390\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-b5h29"
	Jul 08 19:37:56 functional-183000 kubelet[6555]: I0708 19:37:56.477933    6555 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f59ccd24-4234-447a-a949-3084ed2ea6ff-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-t48kf\" (UID: \"f59ccd24-4234-447a-a949-3084ed2ea6ff\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-t48kf"
	Jul 08 19:37:57 functional-183000 kubelet[6555]: I0708 19:37:57.499347    6555 scope.go:117] "RemoveContainer" containerID="fe290da97f1b3a93ec6750b4f6a4e0f5361193a509567fec42a5a6b95f5933ac"
	Jul 08 19:37:57 functional-183000 kubelet[6555]: I0708 19:37:57.499809    6555 scope.go:117] "RemoveContainer" containerID="9f176fd6218794b85ad06335e77e282f7eb2041eb177b19d0a9b9184e4c7f344"
	Jul 08 19:37:57 functional-183000 kubelet[6555]: E0708 19:37:57.499885    6555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-nm82s_default(03efa042-10c7-44d9-9a9a-15839a1eef4d)\"" pod="default/hello-node-connect-6f49f58cd5-nm82s" podUID="03efa042-10c7-44d9-9a9a-15839a1eef4d"
	Jul 08 19:37:57 functional-183000 kubelet[6555]: I0708 19:37:57.983423    6555 scope.go:117] "RemoveContainer" containerID="fe290da97f1b3a93ec6750b4f6a4e0f5361193a509567fec42a5a6b95f5933ac"
	Jul 08 19:37:57 functional-183000 kubelet[6555]: I0708 19:37:57.983533    6555 scope.go:117] "RemoveContainer" containerID="5e30cced0642aa107baab36c88539b724a32bfdbe892ae588b6cadb1c5766983"
	Jul 08 19:37:57 functional-183000 kubelet[6555]: E0708 19:37:57.983619    6555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-gvmdh_default(1dc619cd-7132-4565-9cbb-24424367782c)\"" pod="default/hello-node-65f5d5cc78-gvmdh" podUID="1dc619cd-7132-4565-9cbb-24424367782c"
	
	
	==> storage-provisioner [0e35d1b15d21] <==
	I0708 19:36:53.047040       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 19:36:53.050997       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 19:36:53.051208       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 19:37:10.442056       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 19:37:10.442243       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"38eb60a2-4237-4bd6-9bf6-413650ffed2f", APIVersion:"v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-183000_99d95355-7421-4510-b201-6434e6a740a8 became leader
	I0708 19:37:10.442258       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-183000_99d95355-7421-4510-b201-6434e6a740a8!
	I0708 19:37:10.543287       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-183000_99d95355-7421-4510-b201-6434e6a740a8!
	I0708 19:37:22.263589       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0708 19:37:22.263673       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    e5e04f9e-b45d-4e8d-8873-4c48599f3ad6 363 0 2024-07-08 19:35:47 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-08 19:35:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-3397333d-a41b-4457-a108-1dd2fad090ef &PersistentVolumeClaim{ObjectMeta:{myclaim  default  3397333d-a41b-4457-a108-1dd2fad090ef 650 0 2024-07-08 19:37:22 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-08 19:37:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-08 19:37:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0708 19:37:22.264254       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3397333d-a41b-4457-a108-1dd2fad090ef", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0708 19:37:22.264428       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-3397333d-a41b-4457-a108-1dd2fad090ef" provisioned
	I0708 19:37:22.265152       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0708 19:37:22.265173       1 volume_store.go:212] Trying to save persistentvolume "pvc-3397333d-a41b-4457-a108-1dd2fad090ef"
	I0708 19:37:22.270472       1 volume_store.go:219] persistentvolume "pvc-3397333d-a41b-4457-a108-1dd2fad090ef" saved
	I0708 19:37:22.271265       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3397333d-a41b-4457-a108-1dd2fad090ef", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-3397333d-a41b-4457-a108-1dd2fad090ef
	
	
	==> storage-provisioner [d7a61966ad5d] <==
	I0708 19:36:09.259554       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 19:36:09.270886       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 19:36:09.270906       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 19:36:26.657074       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 19:36:26.657352       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"38eb60a2-4237-4bd6-9bf6-413650ffed2f", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-183000_f20ded0c-bd4d-4fc6-90c8-7b7c087beac5 became leader
	I0708 19:36:26.657409       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-183000_f20ded0c-bd4d-4fc6-90c8-7b7c087beac5!
	I0708 19:36:26.758277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-183000_f20ded0c-bd4d-4fc6-90c8-7b7c087beac5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-183000 -n functional-183000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-183000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-779776cb65-t48kf
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-183000 describe pod busybox-mount kubernetes-dashboard-779776cb65-t48kf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-183000 describe pod busybox-mount kubernetes-dashboard-779776cb65-t48kf: exit status 1 (40.606125ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-183000/192.168.105.4
	Start Time:       Mon, 08 Jul 2024 12:37:50 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://5e63ad7e66a498f93a770e5d1893bfd4bfe2aeb76757817c6878042948826e97
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Jul 2024 12:37:51 -0700
	      Finished:     Mon, 08 Jul 2024 12:37:51 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xmtq4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xmtq4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  13s   default-scheduler  Successfully assigned default/busybox-mount to functional-183000
	  Normal  Pulling    13s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     12s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.132s (1.132s including waiting). Image size: 3547125 bytes.
	  Normal  Created    12s   kubelet            Created container mount-munger
	  Normal  Started    12s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-t48kf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-183000 describe pod busybox-mount kubernetes-dashboard-779776cb65-t48kf: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (37.19s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (79.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-881000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0708 12:38:35.933743    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-881000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 90 (1m18.90336575s)

                                                
                                                
-- stdout --
	* [ha-881000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-881000" primary control-plane node in "ha-881000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:38:15.864041    2608 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:38:15.864174    2608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:38:15.864177    2608 out.go:304] Setting ErrFile to fd 2...
	I0708 12:38:15.864180    2608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:38:15.864310    2608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:38:15.865396    2608 out.go:298] Setting JSON to false
	I0708 12:38:15.882822    2608 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2263,"bootTime":1720465232,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:38:15.882893    2608 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:38:15.886757    2608 out.go:177] * [ha-881000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:38:15.893895    2608 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:38:15.893935    2608 notify.go:220] Checking for updates...
	I0708 12:38:15.901800    2608 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:38:15.904798    2608 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:38:15.907793    2608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:38:15.910818    2608 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:38:15.915710    2608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:38:15.922023    2608 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:38:15.925784    2608 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 12:38:15.932778    2608 start.go:297] selected driver: qemu2
	I0708 12:38:15.932784    2608 start.go:901] validating driver "qemu2" against <nil>
	I0708 12:38:15.932790    2608 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:38:15.935619    2608 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 12:38:15.938777    2608 out.go:177] * Automatically selected the socket_vmnet network
	I0708 12:38:15.941874    2608 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:38:15.941911    2608 cni.go:84] Creating CNI manager for ""
	I0708 12:38:15.941917    2608 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0708 12:38:15.941921    2608 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0708 12:38:15.941974    2608 start.go:340] cluster config:
	{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:38:15.946266    2608 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:38:15.951758    2608 out.go:177] * Starting "ha-881000" primary control-plane node in "ha-881000" cluster
	I0708 12:38:15.955814    2608 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:38:15.955831    2608 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:38:15.955842    2608 cache.go:56] Caching tarball of preloaded images
	I0708 12:38:15.955892    2608 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:38:15.955897    2608 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:38:15.956076    2608 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json ...
	I0708 12:38:15.956090    2608 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json: {Name:mkffc16870a482fa95e7ef1a0194c1ffb496c243 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:38:15.956358    2608 start.go:360] acquireMachinesLock for ha-881000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:38:15.956391    2608 start.go:364] duration metric: took 27.084µs to acquireMachinesLock for "ha-881000"
	I0708 12:38:15.956402    2608 start.go:93] Provisioning new machine with config: &{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:38:15.956458    2608 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 12:38:15.960820    2608 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 12:38:15.984950    2608 start.go:159] libmachine.API.Create for "ha-881000" (driver="qemu2")
	I0708 12:38:15.984974    2608 client.go:168] LocalClient.Create starting
	I0708 12:38:15.985043    2608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 12:38:15.985077    2608 main.go:141] libmachine: Decoding PEM data...
	I0708 12:38:15.985087    2608 main.go:141] libmachine: Parsing certificate...
	I0708 12:38:15.985121    2608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 12:38:15.985144    2608 main.go:141] libmachine: Decoding PEM data...
	I0708 12:38:15.985154    2608 main.go:141] libmachine: Parsing certificate...
	I0708 12:38:15.985485    2608 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 12:38:16.129056    2608 main.go:141] libmachine: Creating SSH key...
	I0708 12:38:16.244235    2608 main.go:141] libmachine: Creating Disk image...
	I0708 12:38:16.244241    2608 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 12:38:16.244424    2608 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/disk.qcow2
	I0708 12:38:16.253831    2608 main.go:141] libmachine: STDOUT: 
	I0708 12:38:16.253848    2608 main.go:141] libmachine: STDERR: 
	I0708 12:38:16.253890    2608 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/disk.qcow2 +20000M
	I0708 12:38:16.261702    2608 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 12:38:16.261714    2608 main.go:141] libmachine: STDERR: 
	I0708 12:38:16.261730    2608 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/disk.qcow2
	I0708 12:38:16.261735    2608 main.go:141] libmachine: Starting QEMU VM...
	I0708 12:38:16.261762    2608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:75:66:b4:8a:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/disk.qcow2
	I0708 12:38:16.298665    2608 main.go:141] libmachine: STDOUT: 
	I0708 12:38:16.298697    2608 main.go:141] libmachine: STDERR: 
	I0708 12:38:16.298700    2608 main.go:141] libmachine: Attempt 0
	I0708 12:38:16.298718    2608 main.go:141] libmachine: Searching for de:75:66:b4:8a:80 in /var/db/dhcpd_leases ...
	I0708 12:38:16.298774    2608 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0708 12:38:16.298793    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:38:16.298798    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:38:16.298804    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:38:16.298812    2608 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:38:18.300973    2608 main.go:141] libmachine: Attempt 1
	I0708 12:38:18.301049    2608 main.go:141] libmachine: Searching for de:75:66:b4:8a:80 in /var/db/dhcpd_leases ...
	I0708 12:38:18.301382    2608 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0708 12:38:18.301435    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:38:18.301469    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:38:18.301499    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:38:18.301527    2608 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:38:20.303794    2608 main.go:141] libmachine: Attempt 2
	I0708 12:38:20.303870    2608 main.go:141] libmachine: Searching for de:75:66:b4:8a:80 in /var/db/dhcpd_leases ...
	I0708 12:38:20.304158    2608 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0708 12:38:20.304208    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:38:20.304236    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:38:20.304263    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:38:20.304290    2608 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:38:22.305023    2608 main.go:141] libmachine: Attempt 3
	I0708 12:38:22.305113    2608 main.go:141] libmachine: Searching for de:75:66:b4:8a:80 in /var/db/dhcpd_leases ...
	I0708 12:38:22.305187    2608 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0708 12:38:22.305205    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:38:22.305213    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:38:22.305217    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:38:22.305224    2608 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:38:24.307227    2608 main.go:141] libmachine: Attempt 4
	I0708 12:38:24.307238    2608 main.go:141] libmachine: Searching for de:75:66:b4:8a:80 in /var/db/dhcpd_leases ...
	I0708 12:38:24.307274    2608 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0708 12:38:24.307280    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:38:24.307285    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:38:24.307289    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:38:24.307293    2608 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:38:26.309304    2608 main.go:141] libmachine: Attempt 5
	I0708 12:38:26.309312    2608 main.go:141] libmachine: Searching for de:75:66:b4:8a:80 in /var/db/dhcpd_leases ...
	I0708 12:38:26.309362    2608 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0708 12:38:26.309369    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:38:26.309375    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:38:26.309380    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:38:26.309386    2608 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:38:28.311415    2608 main.go:141] libmachine: Attempt 6
	I0708 12:38:28.311436    2608 main.go:141] libmachine: Searching for de:75:66:b4:8a:80 in /var/db/dhcpd_leases ...
	I0708 12:38:28.311530    2608 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0708 12:38:28.311543    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:38:28.311549    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:38:28.311554    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:38:28.311559    2608 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:38:30.313601    2608 main.go:141] libmachine: Attempt 7
	I0708 12:38:30.313635    2608 main.go:141] libmachine: Searching for de:75:66:b4:8a:80 in /var/db/dhcpd_leases ...
	I0708 12:38:30.313712    2608 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0708 12:38:30.313726    2608 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d91b4}
	I0708 12:38:30.313730    2608 main.go:141] libmachine: Found match: de:75:66:b4:8a:80
	I0708 12:38:30.313736    2608 main.go:141] libmachine: IP: 192.168.105.5
	I0708 12:38:30.313741    2608 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0708 12:38:31.333088    2608 machine.go:94] provisionDockerMachine start ...
	I0708 12:38:31.333281    2608 main.go:141] libmachine: Using SSH client type: native
	I0708 12:38:31.333868    2608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104536920] 0x104539180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:38:31.333886    2608 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 12:38:31.406308    2608 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 12:38:31.406338    2608 buildroot.go:166] provisioning hostname "ha-881000"
	I0708 12:38:31.406448    2608 main.go:141] libmachine: Using SSH client type: native
	I0708 12:38:31.406711    2608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104536920] 0x104539180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:38:31.406723    2608 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-881000 && echo "ha-881000" | sudo tee /etc/hostname
	I0708 12:38:31.466282    2608 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-881000
	
	I0708 12:38:31.466334    2608 main.go:141] libmachine: Using SSH client type: native
	I0708 12:38:31.466477    2608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104536920] 0x104539180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:38:31.466486    2608 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-881000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-881000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-881000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 12:38:31.522435    2608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 12:38:31.522455    2608 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19195-1270/.minikube CaCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19195-1270/.minikube}
	I0708 12:38:31.522471    2608 buildroot.go:174] setting up certificates
	I0708 12:38:31.522479    2608 provision.go:84] configureAuth start
	I0708 12:38:31.522484    2608 provision.go:143] copyHostCerts
	I0708 12:38:31.522506    2608 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 12:38:31.522567    2608 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem, removing ...
	I0708 12:38:31.522573    2608 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 12:38:31.522705    2608 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem (1123 bytes)
	I0708 12:38:31.522894    2608 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 12:38:31.522924    2608 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem, removing ...
	I0708 12:38:31.522927    2608 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 12:38:31.523165    2608 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem (1675 bytes)
	I0708 12:38:31.523305    2608 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 12:38:31.523335    2608 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem, removing ...
	I0708 12:38:31.523339    2608 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 12:38:31.523401    2608 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem (1078 bytes)
	I0708 12:38:31.523514    2608 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem org=jenkins.ha-881000 san=[127.0.0.1 192.168.105.5 ha-881000 localhost minikube]
	I0708 12:38:31.576261    2608 provision.go:177] copyRemoteCerts
	I0708 12:38:31.576300    2608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 12:38:31.576307    2608 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:38:31.604077    2608 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 12:38:31.604128    2608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0708 12:38:31.612044    2608 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 12:38:31.612083    2608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 12:38:31.619720    2608 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 12:38:31.619776    2608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 12:38:31.627786    2608 provision.go:87] duration metric: took 105.302709ms to configureAuth
	I0708 12:38:31.627794    2608 buildroot.go:189] setting minikube options for container-runtime
	I0708 12:38:31.627914    2608 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:38:31.627947    2608 main.go:141] libmachine: Using SSH client type: native
	I0708 12:38:31.628037    2608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104536920] 0x104539180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:38:31.628042    2608 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0708 12:38:31.675499    2608 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0708 12:38:31.675507    2608 buildroot.go:70] root file system type: tmpfs
	I0708 12:38:31.675565    2608 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0708 12:38:31.675620    2608 main.go:141] libmachine: Using SSH client type: native
	I0708 12:38:31.675727    2608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104536920] 0x104539180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:38:31.675759    2608 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0708 12:38:31.727379    2608 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0708 12:38:31.727426    2608 main.go:141] libmachine: Using SSH client type: native
	I0708 12:38:31.727534    2608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104536920] 0x104539180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:38:31.727543    2608 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0708 12:38:33.115767    2608 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0708 12:38:33.115781    2608 machine.go:97] duration metric: took 1.78270525s to provisionDockerMachine
	I0708 12:38:33.115787    2608 client.go:171] duration metric: took 17.131217667s to LocalClient.Create
	I0708 12:38:33.115804    2608 start.go:167] duration metric: took 17.131265083s to libmachine.API.Create "ha-881000"
	I0708 12:38:33.115814    2608 start.go:293] postStartSetup for "ha-881000" (driver="qemu2")
	I0708 12:38:33.115820    2608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 12:38:33.115898    2608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 12:38:33.115911    2608 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:38:33.144655    2608 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 12:38:33.146222    2608 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 12:38:33.146234    2608 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/addons for local assets ...
	I0708 12:38:33.146327    2608 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/files for local assets ...
	I0708 12:38:33.146449    2608 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> 17672.pem in /etc/ssl/certs
	I0708 12:38:33.146453    2608 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> /etc/ssl/certs/17672.pem
	I0708 12:38:33.146593    2608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 12:38:33.150192    2608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /etc/ssl/certs/17672.pem (1708 bytes)
	I0708 12:38:33.158596    2608 start.go:296] duration metric: took 42.777875ms for postStartSetup
	I0708 12:38:33.159042    2608 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json ...
	I0708 12:38:33.159237    2608 start.go:128] duration metric: took 17.203184959s to createHost
	I0708 12:38:33.159265    2608 main.go:141] libmachine: Using SSH client type: native
	I0708 12:38:33.159363    2608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104536920] 0x104539180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:38:33.159367    2608 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 12:38:33.207148    2608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720467513.094019252
	
	I0708 12:38:33.207156    2608 fix.go:216] guest clock: 1720467513.094019252
	I0708 12:38:33.207160    2608 fix.go:229] Guest: 2024-07-08 12:38:33.094019252 -0700 PDT Remote: 2024-07-08 12:38:33.15924 -0700 PDT m=+17.314887084 (delta=-65.220748ms)
	I0708 12:38:33.207170    2608 fix.go:200] guest clock delta is within tolerance: -65.220748ms
	I0708 12:38:33.207172    2608 start.go:83] releasing machines lock for "ha-881000", held for 17.251189208s
	I0708 12:38:33.207466    2608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 12:38:33.207467    2608 ssh_runner.go:195] Run: cat /version.json
	I0708 12:38:33.207485    2608 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:38:33.207492    2608 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:38:33.232628    2608 ssh_runner.go:195] Run: systemctl --version
	I0708 12:38:33.235449    2608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 12:38:33.276583    2608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 12:38:33.276627    2608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 12:38:33.282768    2608 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 12:38:33.282776    2608 start.go:494] detecting cgroup driver to use...
	I0708 12:38:33.282839    2608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:38:33.289373    2608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0708 12:38:33.293263    2608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0708 12:38:33.297153    2608 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0708 12:38:33.297187    2608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0708 12:38:33.301000    2608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:38:33.305355    2608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0708 12:38:33.309297    2608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:38:33.313143    2608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 12:38:33.317082    2608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0708 12:38:33.320955    2608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0708 12:38:33.324765    2608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0708 12:38:33.328581    2608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 12:38:33.332097    2608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 12:38:33.335402    2608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:38:33.419115    2608 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0708 12:38:33.429726    2608 start.go:494] detecting cgroup driver to use...
	I0708 12:38:33.429798    2608 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0708 12:38:33.435985    2608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:38:33.447571    2608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 12:38:33.456386    2608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:38:33.462145    2608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:38:33.467899    2608 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0708 12:38:33.505547    2608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:38:33.511702    2608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:38:33.518023    2608 ssh_runner.go:195] Run: which cri-dockerd
	I0708 12:38:33.519477    2608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0708 12:38:33.522562    2608 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0708 12:38:33.528751    2608 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0708 12:38:33.619699    2608 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0708 12:38:33.693696    2608 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0708 12:38:33.693762    2608 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0708 12:38:33.700285    2608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:38:33.784817    2608 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 12:39:34.675808    2608 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.892386042s)
	I0708 12:39:34.676105    2608 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0708 12:39:34.709170    2608 out.go:177] 
	W0708 12:39:34.714210    2608 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 08 19:38:31 ha-881000 systemd[1]: Starting Docker Application Container Engine...
	Jul 08 19:38:31 ha-881000 dockerd[515]: time="2024-07-08T19:38:31.845901918Z" level=info msg="Starting up"
	Jul 08 19:38:31 ha-881000 dockerd[515]: time="2024-07-08T19:38:31.846179543Z" level=info msg="containerd not running, starting managed containerd"
	Jul 08 19:38:31 ha-881000 dockerd[515]: time="2024-07-08T19:38:31.846617002Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.859055210Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867723502Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867736543Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867756710Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867763043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867792210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867797918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867864460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867877168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867882877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867887210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867912460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867995960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.868559335Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.868575877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.868642377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.868654127Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.868686710Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.868708793Z" level=info msg="metadata content store policy set" policy=shared
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871425335Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871446543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871453752Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871459960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871466043Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871497918Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871747043Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871863085Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871889377Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871909710Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871928627Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871946502Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871964543Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871982085Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872001960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872021293Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872039460Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872056460Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872078335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872097335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872115543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872132335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872149502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872167127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872184168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872203293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872220668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872239543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872255460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872272418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872296877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872313043Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872325668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872337502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872345210Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872389418Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872405835Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872411335Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872418668Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872425252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872441502Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872451543Z" level=info msg="NRI interface is disabled by configuration."
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872592877Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872615335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872629335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872642210Z" level=info msg="containerd successfully booted in 0.013963s"
	Jul 08 19:38:32 ha-881000 dockerd[515]: time="2024-07-08T19:38:32.890784877Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 08 19:38:32 ha-881000 dockerd[515]: time="2024-07-08T19:38:32.901401877Z" level=info msg="Loading containers: start."
	Jul 08 19:38:32 ha-881000 dockerd[515]: time="2024-07-08T19:38:32.952689794Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 08 19:38:32 ha-881000 dockerd[515]: time="2024-07-08T19:38:32.981126127Z" level=info msg="Loading containers: done."
	Jul 08 19:38:32 ha-881000 dockerd[515]: time="2024-07-08T19:38:32.987287669Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 08 19:38:32 ha-881000 dockerd[515]: time="2024-07-08T19:38:32.987334669Z" level=info msg="Daemon has completed initialization"
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.000272627Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 08 19:38:33 ha-881000 systemd[1]: Started Docker Application Container Engine.
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.000630585Z" level=info msg="API listen on [::]:2376"
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.677262794Z" level=info msg="Processing signal 'terminated'"
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.677801419Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.677892002Z" level=info msg="Daemon shutdown complete"
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.677931919Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.677994002Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 08 19:38:33 ha-881000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 08 19:38:34 ha-881000 systemd[1]: docker.service: Deactivated successfully.
	Jul 08 19:38:34 ha-881000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 08 19:38:34 ha-881000 systemd[1]: Starting Docker Application Container Engine...
	Jul 08 19:38:34 ha-881000 dockerd[922]: time="2024-07-08T19:38:34.737652503Z" level=info msg="Starting up"
	Jul 08 19:39:34 ha-881000 dockerd[922]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 08 19:39:34 ha-881000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 08 19:39:34 ha-881000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 08 19:39:34 ha-881000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 08 19:38:31 ha-881000 systemd[1]: Starting Docker Application Container Engine...
	Jul 08 19:38:31 ha-881000 dockerd[515]: time="2024-07-08T19:38:31.845901918Z" level=info msg="Starting up"
	Jul 08 19:38:31 ha-881000 dockerd[515]: time="2024-07-08T19:38:31.846179543Z" level=info msg="containerd not running, starting managed containerd"
	Jul 08 19:38:31 ha-881000 dockerd[515]: time="2024-07-08T19:38:31.846617002Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.859055210Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867723502Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867736543Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867756710Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867763043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867792210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867797918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867864460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867877168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867882877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867887210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867912460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.867995960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.868559335Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.868575877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.868642377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.868654127Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.868686710Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.868708793Z" level=info msg="metadata content store policy set" policy=shared
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871425335Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871446543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871453752Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871459960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871466043Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871497918Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871747043Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871863085Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871889377Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871909710Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871928627Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871946502Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871964543Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.871982085Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872001960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872021293Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872039460Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872056460Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872078335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872097335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872115543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872132335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872149502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872167127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872184168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872203293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872220668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872239543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872255460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872272418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872296877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872313043Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872325668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872337502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872345210Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872389418Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872405835Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872411335Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872418668Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872425252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872441502Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872451543Z" level=info msg="NRI interface is disabled by configuration."
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872592877Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872615335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872629335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 08 19:38:31 ha-881000 dockerd[521]: time="2024-07-08T19:38:31.872642210Z" level=info msg="containerd successfully booted in 0.013963s"
	Jul 08 19:38:32 ha-881000 dockerd[515]: time="2024-07-08T19:38:32.890784877Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 08 19:38:32 ha-881000 dockerd[515]: time="2024-07-08T19:38:32.901401877Z" level=info msg="Loading containers: start."
	Jul 08 19:38:32 ha-881000 dockerd[515]: time="2024-07-08T19:38:32.952689794Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 08 19:38:32 ha-881000 dockerd[515]: time="2024-07-08T19:38:32.981126127Z" level=info msg="Loading containers: done."
	Jul 08 19:38:32 ha-881000 dockerd[515]: time="2024-07-08T19:38:32.987287669Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 08 19:38:32 ha-881000 dockerd[515]: time="2024-07-08T19:38:32.987334669Z" level=info msg="Daemon has completed initialization"
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.000272627Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 08 19:38:33 ha-881000 systemd[1]: Started Docker Application Container Engine.
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.000630585Z" level=info msg="API listen on [::]:2376"
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.677262794Z" level=info msg="Processing signal 'terminated'"
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.677801419Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.677892002Z" level=info msg="Daemon shutdown complete"
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.677931919Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 08 19:38:33 ha-881000 dockerd[515]: time="2024-07-08T19:38:33.677994002Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 08 19:38:33 ha-881000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 08 19:38:34 ha-881000 systemd[1]: docker.service: Deactivated successfully.
	Jul 08 19:38:34 ha-881000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 08 19:38:34 ha-881000 systemd[1]: Starting Docker Application Container Engine...
	Jul 08 19:38:34 ha-881000 dockerd[922]: time="2024-07-08T19:38:34.737652503Z" level=info msg="Starting up"
	Jul 08 19:39:34 ha-881000 dockerd[922]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 08 19:39:34 ha-881000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 08 19:39:34 ha-881000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 08 19:39:34 ha-881000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0708 12:39:34.714323    2608 out.go:239] * 
	* 
	W0708 12:39:34.715527    2608 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:39:34.730120    2608 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-881000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 6 (91.542042ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 12:39:34.834156    2644 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/StartCluster (79.01s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (74.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (57.336833ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-881000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- rollout status deployment/busybox: exit status 1 (56.169583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.654083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.753292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.944875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.415875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.381416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.100375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.2505ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.443833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.397792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.009833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.717417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.345042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.596ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.466542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 6 (69.452084ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 12:40:49.288267    2712 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/DeployApp (74.46s)
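Note: every kubectl call in this test fails with 'no server found for cluster "ha-881000"', and the post-mortem status shows the profile is missing from the jenkins kubeconfig. A minimal sketch of confirming and repairing the stale context by hand, assuming the same profile name and binary path as the run above (these commands are illustrative and were not part of the test):

    # list the contexts kubectl currently knows about
    kubectl config get-contexts

    # ask minikube to rewrite the kubeconfig entry for this profile,
    # as the status warning above suggests
    out/minikube-darwin-arm64 -p ha-881000 update-context

    # re-try the query the test was issuing
    out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}'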

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.515833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-881000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 6 (71.116792ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 12:40:49.416377    2717 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.13s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-881000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-881000 -v=7 --alsologtostderr: exit status 103 (71.5995ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-881000 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-881000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:40:49.449067    2719 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:40:49.449327    2719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:49.449330    2719 out.go:304] Setting ErrFile to fd 2...
	I0708 12:40:49.449333    2719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:49.449477    2719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:40:49.449710    2719 mustload.go:65] Loading cluster: ha-881000
	I0708 12:40:49.449895    2719 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:40:49.450592    2719 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:49.450683    2719 api_server.go:166] Checking apiserver status ...
	I0708 12:40:49.450705    2719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:40:49.450712    2719 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	W0708 12:40:49.479428    2719 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:40:49.483628    2719 out.go:177] * The control-plane node ha-881000 apiserver is not running: (state=Stopped)
	I0708 12:40:49.488524    2719 out.go:177]   To start a cluster, run: "minikube start -p ha-881000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-881000 -v=7 --alsologtostderr" : exit status 103
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 6 (70.212625ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 12:40:49.557918    2721 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.14s)
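The node add exits 103 because minikube's apiserver probe (the sudo pgrep -xnf kube-apiserver.*minikube.* run over SSH in the stderr above) finds no matching process. A hedged sketch of repeating that probe by hand, assuming the guest is still reachable over SSH (illustrative only, not part of the test run):

    # repeat the probe the log shows, from inside the guest
    out/minikube-darwin-arm64 -p ha-881000 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # if nothing matches, check whether the kubelet unit is active at all
    out/minikube-darwin-arm64 -p ha-881000 ssh -- sudo systemctl status kubelet --no-pager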

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-881000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-881000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.515ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-881000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-881000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-881000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 6 (68.378417ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 12:40:49.653489    2724 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-881000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-881000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-881000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-881000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-881000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-881000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-881000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-881000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\"
:\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\
"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 6 (68.247375ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 12:40:49.797878    2729 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.14s)
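The assertion parses the profile list --output json blob quoted above and expects four nodes with an "HAppy" status, while the profile reports a single node and "Stopped". A minimal sketch of pulling the same fields out of that JSON from the command line, assuming jq is installed locally (jq is not part of the test harness):

    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | {Name, Status, nodes: (.Config.Nodes | length)}'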

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status --output json -v=7 --alsologtostderr: exit status 6 (70.327917ms)

                                                
                                                
-- stdout --
	{"Name":"ha-881000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:40:49.831612    2731 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:40:49.831769    2731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:49.831772    2731 out.go:304] Setting ErrFile to fd 2...
	I0708 12:40:49.831774    2731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:49.831916    2731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:40:49.832040    2731 out.go:298] Setting JSON to true
	I0708 12:40:49.832051    2731 mustload.go:65] Loading cluster: ha-881000
	I0708 12:40:49.832120    2731 notify.go:220] Checking for updates...
	I0708 12:40:49.832250    2731 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:40:49.832256    2731 status.go:255] checking status of ha-881000 ...
	I0708 12:40:49.832899    2731 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0708 12:40:49.832907    2731 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:49.833004    2731 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:49.833120    2731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 12:40:49.833127    2731 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:40:49.861113    2731 ssh_runner.go:195] Run: systemctl --version
	I0708 12:40:49.863094    2731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0708 12:40:49.868810    2731 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:40:49.868821    2731 api_server.go:166] Checking apiserver status ...
	I0708 12:40:49.868847    2731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0708 12:40:49.872911    2731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:40:49.872920    2731 status.go:422] ha-881000 apiserver status = Stopped (err=<nil>)
	I0708 12:40:49.872924    2731 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:328: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-881000 status --output json -v=7 --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 6 (71.849792ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 12:40:49.940785    2733 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.14s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 node stop m02 -v=7 --alsologtostderr: exit status 85 (44.5565ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:40:49.974266    2735 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:40:49.974522    2735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:49.974525    2735 out.go:304] Setting ErrFile to fd 2...
	I0708 12:40:49.974528    2735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:49.974660    2735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:40:49.974896    2735 mustload.go:65] Loading cluster: ha-881000
	I0708 12:40:49.975096    2735 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:40:49.979112    2735 out.go:177] 
	W0708 12:40:49.980329    2735 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0708 12:40:49.980334    2735 out.go:239] * 
	* 
	W0708 12:40:49.981743    2735 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:40:49.985888    2735 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-881000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 6 (71.458625ms)

                                                
                                                
-- stdout --
	ha-881000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:40:50.018830    2737 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:40:50.018964    2737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:50.018967    2737 out.go:304] Setting ErrFile to fd 2...
	I0708 12:40:50.018969    2737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:50.019104    2737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:40:50.019227    2737 out.go:298] Setting JSON to false
	I0708 12:40:50.019237    2737 mustload.go:65] Loading cluster: ha-881000
	I0708 12:40:50.019289    2737 notify.go:220] Checking for updates...
	I0708 12:40:50.019443    2737 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:40:50.019450    2737 status.go:255] checking status of ha-881000 ...
	I0708 12:40:50.020138    2737 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0708 12:40:50.020145    2737 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:50.020248    2737 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:50.020358    2737 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 12:40:50.020365    2737 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:40:50.049424    2737 ssh_runner.go:195] Run: systemctl --version
	I0708 12:40:50.051390    2737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0708 12:40:50.057240    2737 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:40:50.057251    2737 api_server.go:166] Checking apiserver status ...
	I0708 12:40:50.057274    2737 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0708 12:40:50.061300    2737 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:40:50.061307    2737 status.go:422] ha-881000 apiserver status = Stopped (err=<nil>)
	I0708 12:40:50.061312    2737 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 6 (68.769916ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 12:40:50.125781    2739 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.19s)
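node stop m02 exits 85 with GUEST_NODE_RETRIEVE because the profile only ever registered one node, so there is no m02 to stop. A hedged sketch of listing the nodes minikube actually recorded for this profile (illustrative, using the same binary as above):

    # enumerate the nodes recorded in the ha-881000 profile
    out/minikube-darwin-arm64 -p ha-881000 node list

    # overall status of the profile for comparison
    out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr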

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-881000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-881000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-881000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-881000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersio
n\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\
",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 6 (69.482459ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 12:40:50.270824    2744 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.15s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (53.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 node start m02 -v=7 --alsologtostderr: exit status 85 (46.496ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:40:50.304344    2746 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:40:50.304591    2746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:50.304594    2746 out.go:304] Setting ErrFile to fd 2...
	I0708 12:40:50.304597    2746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:50.304735    2746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:40:50.304972    2746 mustload.go:65] Loading cluster: ha-881000
	I0708 12:40:50.305169    2746 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:40:50.309407    2746 out.go:177] 
	W0708 12:40:50.312347    2746 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0708 12:40:50.312351    2746 out.go:239] * 
	* 
	W0708 12:40:50.313915    2746 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:40:50.318363    2746 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0708 12:40:50.304344    2746 out.go:291] Setting OutFile to fd 1 ...
I0708 12:40:50.304591    2746 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:40:50.304594    2746 out.go:304] Setting ErrFile to fd 2...
I0708 12:40:50.304597    2746 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:40:50.304735    2746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
I0708 12:40:50.304972    2746 mustload.go:65] Loading cluster: ha-881000
I0708 12:40:50.305169    2746 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0708 12:40:50.309407    2746 out.go:177] 
W0708 12:40:50.312347    2746 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0708 12:40:50.312351    2746 out.go:239] * 
* 
W0708 12:40:50.313915    2746 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0708 12:40:50.318363    2746 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-881000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 6 (69.250625ms)

                                                
                                                
-- stdout --
	ha-881000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:40:50.350484    2748 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:40:50.350632    2748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:50.350635    2748 out.go:304] Setting ErrFile to fd 2...
	I0708 12:40:50.350638    2748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:50.350761    2748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:40:50.350882    2748 out.go:298] Setting JSON to false
	I0708 12:40:50.350895    2748 mustload.go:65] Loading cluster: ha-881000
	I0708 12:40:50.350939    2748 notify.go:220] Checking for updates...
	I0708 12:40:50.351084    2748 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:40:50.351090    2748 status.go:255] checking status of ha-881000 ...
	I0708 12:40:50.351768    2748 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0708 12:40:50.351776    2748 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:50.351877    2748 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:50.351985    2748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 12:40:50.351993    2748 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:40:50.378855    2748 ssh_runner.go:195] Run: systemctl --version
	I0708 12:40:50.380961    2748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0708 12:40:50.386772    2748 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:40:50.386785    2748 api_server.go:166] Checking apiserver status ...
	I0708 12:40:50.386807    2748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0708 12:40:50.391499    2748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:40:50.391508    2748 status.go:422] ha-881000 apiserver status = Stopped (err=<nil>)
	I0708 12:40:50.391512    2748 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 6 (116.774541ms)

                                                
                                                
-- stdout --
	ha-881000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:40:51.041740    2752 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:40:51.041929    2752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:51.041934    2752 out.go:304] Setting ErrFile to fd 2...
	I0708 12:40:51.041937    2752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:51.042116    2752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:40:51.042276    2752 out.go:298] Setting JSON to false
	I0708 12:40:51.042289    2752 mustload.go:65] Loading cluster: ha-881000
	I0708 12:40:51.042331    2752 notify.go:220] Checking for updates...
	I0708 12:40:51.042558    2752 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:40:51.042566    2752 status.go:255] checking status of ha-881000 ...
	I0708 12:40:51.043436    2752 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0708 12:40:51.043446    2752 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:51.043589    2752 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:51.043738    2752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 12:40:51.043747    2752 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:40:51.073049    2752 ssh_runner.go:195] Run: systemctl --version
	I0708 12:40:51.075244    2752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0708 12:40:51.081892    2752 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:40:51.081912    2752 api_server.go:166] Checking apiserver status ...
	I0708 12:40:51.081938    2752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0708 12:40:51.086618    2752 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:40:51.086627    2752 status.go:422] ha-881000 apiserver status = Stopped (err=<nil>)
	I0708 12:40:51.086631    2752 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
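
The apiserver probe in each retry is sudo pgrep -xnf kube-apiserver.*minikube.*; pgrep exits with status 1 when no process matches, which the log records as "stopped: unable to get apiserver pid" and reports as apiserver: Stopped. A hedged Go sketch of that exit-status mapping (apiserverRunning is a hypothetical helper, not the actual status.go code):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // apiserverRunning reports whether a kube-apiserver process matching the
    // minikube pattern exists, treating pgrep exit status 1 as "not running".
    func apiserverRunning() (bool, error) {
        cmd := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*")
        if err := cmd.Run(); err != nil {
            var ee *exec.ExitError
            if errors.As(err, &ee) && ee.ExitCode() == 1 {
                return false, nil // no matching process: apiserver is stopped
            }
            return false, err // pgrep itself failed
        }
        return true, nil
    }

    func main() {
        up, err := apiserverRunning()
        fmt.Println("apiserver running:", up, "err:", err)
    }
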
E0708 12:40:52.066731    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 6 (114.324917ms)

                                                
                                                
-- stdout --
	ha-881000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:40:52.491977    2754 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:40:52.492194    2754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:52.492198    2754 out.go:304] Setting ErrFile to fd 2...
	I0708 12:40:52.492202    2754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:52.492362    2754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:40:52.492519    2754 out.go:298] Setting JSON to false
	I0708 12:40:52.492532    2754 mustload.go:65] Loading cluster: ha-881000
	I0708 12:40:52.492569    2754 notify.go:220] Checking for updates...
	I0708 12:40:52.492792    2754 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:40:52.492800    2754 status.go:255] checking status of ha-881000 ...
	I0708 12:40:52.493672    2754 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0708 12:40:52.493681    2754 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:52.493815    2754 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:52.493963    2754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 12:40:52.493974    2754 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:40:52.522224    2754 ssh_runner.go:195] Run: systemctl --version
	I0708 12:40:52.524441    2754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0708 12:40:52.530880    2754 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:40:52.530895    2754 api_server.go:166] Checking apiserver status ...
	I0708 12:40:52.530919    2754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0708 12:40:52.535848    2754 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:40:52.535860    2754 status.go:422] ha-881000 apiserver status = Stopped (err=<nil>)
	I0708 12:40:52.535868    2754 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 6 (115.791166ms)

                                                
                                                
-- stdout --
	ha-881000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:40:54.086791    2756 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:40:54.087009    2756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:54.087014    2756 out.go:304] Setting ErrFile to fd 2...
	I0708 12:40:54.087017    2756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:54.087179    2756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:40:54.087356    2756 out.go:298] Setting JSON to false
	I0708 12:40:54.087370    2756 mustload.go:65] Loading cluster: ha-881000
	I0708 12:40:54.087412    2756 notify.go:220] Checking for updates...
	I0708 12:40:54.087613    2756 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:40:54.087621    2756 status.go:255] checking status of ha-881000 ...
	I0708 12:40:54.088493    2756 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0708 12:40:54.088508    2756 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:54.088679    2756 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:54.088833    2756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 12:40:54.088842    2756 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:40:54.117452    2756 ssh_runner.go:195] Run: systemctl --version
	I0708 12:40:54.119772    2756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0708 12:40:54.126256    2756 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:40:54.126273    2756 api_server.go:166] Checking apiserver status ...
	I0708 12:40:54.126298    2756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0708 12:40:54.130987    2756 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:40:54.130994    2756 status.go:422] ha-881000 apiserver status = Stopped (err=<nil>)
	I0708 12:40:54.130999    2756 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 6 (116.890542ms)

                                                
                                                
-- stdout --
	ha-881000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:40:56.354600    2759 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:40:56.354830    2759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:56.354835    2759 out.go:304] Setting ErrFile to fd 2...
	I0708 12:40:56.354839    2759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:40:56.355014    2759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:40:56.355206    2759 out.go:298] Setting JSON to false
	I0708 12:40:56.355222    2759 mustload.go:65] Loading cluster: ha-881000
	I0708 12:40:56.355261    2759 notify.go:220] Checking for updates...
	I0708 12:40:56.355545    2759 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:40:56.355556    2759 status.go:255] checking status of ha-881000 ...
	I0708 12:40:56.356495    2759 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0708 12:40:56.356516    2759 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:56.356679    2759 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:40:56.356839    2759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 12:40:56.356849    2759 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:40:56.385863    2759 ssh_runner.go:195] Run: systemctl --version
	I0708 12:40:56.388043    2759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0708 12:40:56.394271    2759 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:40:56.394285    2759 api_server.go:166] Checking apiserver status ...
	I0708 12:40:56.394308    2759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0708 12:40:56.398971    2759 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:40:56.398983    2759 status.go:422] ha-881000 apiserver status = Stopped (err=<nil>)
	I0708 12:40:56.398988    2759 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 6 (116.235834ms)

                                                
                                                
-- stdout --
	ha-881000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:41:01.803924    2761 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:41:01.804109    2761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:41:01.804114    2761 out.go:304] Setting ErrFile to fd 2...
	I0708 12:41:01.804117    2761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:41:01.804335    2761 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:41:01.804482    2761 out.go:298] Setting JSON to false
	I0708 12:41:01.804497    2761 mustload.go:65] Loading cluster: ha-881000
	I0708 12:41:01.804527    2761 notify.go:220] Checking for updates...
	I0708 12:41:01.804751    2761 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:41:01.804761    2761 status.go:255] checking status of ha-881000 ...
	I0708 12:41:01.805621    2761 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0708 12:41:01.805629    2761 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:41:01.805752    2761 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:41:01.805890    2761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 12:41:01.805900    2761 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:41:01.834800    2761 ssh_runner.go:195] Run: systemctl --version
	I0708 12:41:01.837138    2761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0708 12:41:01.843115    2761 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:41:01.843126    2761 api_server.go:166] Checking apiserver status ...
	I0708 12:41:01.843152    2761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0708 12:41:01.847650    2761 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:41:01.847660    2761 status.go:422] ha-881000 apiserver status = Stopped (err=<nil>)
	I0708 12:41:01.847664    2761 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 6 (115.188292ms)

                                                
                                                
-- stdout --
	ha-881000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:41:07.075060    2763 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:41:07.075296    2763 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:41:07.075300    2763 out.go:304] Setting ErrFile to fd 2...
	I0708 12:41:07.075304    2763 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:41:07.075453    2763 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:41:07.075644    2763 out.go:298] Setting JSON to false
	I0708 12:41:07.075660    2763 mustload.go:65] Loading cluster: ha-881000
	I0708 12:41:07.075698    2763 notify.go:220] Checking for updates...
	I0708 12:41:07.075931    2763 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:41:07.075941    2763 status.go:255] checking status of ha-881000 ...
	I0708 12:41:07.076843    2763 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0708 12:41:07.076854    2763 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:41:07.077001    2763 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:41:07.077148    2763 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 12:41:07.077159    2763 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:41:07.106405    2763 ssh_runner.go:195] Run: systemctl --version
	I0708 12:41:07.108550    2763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0708 12:41:07.114649    2763 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:41:07.114667    2763 api_server.go:166] Checking apiserver status ...
	I0708 12:41:07.114697    2763 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0708 12:41:07.118994    2763 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:41:07.119005    2763 status.go:422] ha-881000 apiserver status = Stopped (err=<nil>)
	I0708 12:41:07.119010    2763 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 6 (117.288625ms)

                                                
                                                
-- stdout --
	ha-881000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:41:18.337631    2765 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:41:18.337844    2765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:41:18.337849    2765 out.go:304] Setting ErrFile to fd 2...
	I0708 12:41:18.337851    2765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:41:18.338033    2765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:41:18.338189    2765 out.go:298] Setting JSON to false
	I0708 12:41:18.338206    2765 mustload.go:65] Loading cluster: ha-881000
	I0708 12:41:18.338236    2765 notify.go:220] Checking for updates...
	I0708 12:41:18.338478    2765 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:41:18.338489    2765 status.go:255] checking status of ha-881000 ...
	I0708 12:41:18.339421    2765 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0708 12:41:18.339431    2765 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:41:18.339584    2765 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:41:18.339736    2765 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 12:41:18.339746    2765 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:41:18.368542    2765 ssh_runner.go:195] Run: systemctl --version
	I0708 12:41:18.370763    2765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0708 12:41:18.376841    2765 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:41:18.376855    2765 api_server.go:166] Checking apiserver status ...
	I0708 12:41:18.376879    2765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0708 12:41:18.381293    2765 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:41:18.381300    2765 status.go:422] ha-881000 apiserver status = Stopped (err=<nil>)
	I0708 12:41:18.381305    2765 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0708 12:41:19.772430    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 6 (116.692166ms)

                                                
                                                
-- stdout --
	ha-881000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:41:29.709315    2769 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:41:29.709576    2769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:41:29.709581    2769 out.go:304] Setting ErrFile to fd 2...
	I0708 12:41:29.709585    2769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:41:29.709774    2769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:41:29.709956    2769 out.go:298] Setting JSON to false
	I0708 12:41:29.709974    2769 mustload.go:65] Loading cluster: ha-881000
	I0708 12:41:29.710018    2769 notify.go:220] Checking for updates...
	I0708 12:41:29.710257    2769 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:41:29.710265    2769 status.go:255] checking status of ha-881000 ...
	I0708 12:41:29.711207    2769 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0708 12:41:29.711217    2769 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:41:29.711350    2769 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:41:29.711497    2769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 12:41:29.711507    2769 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:41:29.740822    2769 ssh_runner.go:195] Run: systemctl --version
	I0708 12:41:29.743237    2769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0708 12:41:29.750257    2769 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:41:29.750271    2769 api_server.go:166] Checking apiserver status ...
	I0708 12:41:29.750306    2769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0708 12:41:29.754965    2769 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:41:29.754973    2769 status.go:422] ha-881000 apiserver status = Stopped (err=<nil>)
	I0708 12:41:29.754978    2769 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 6 (120.964875ms)

                                                
                                                
-- stdout --
	ha-881000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:41:43.306971    2777 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:41:43.307210    2777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:41:43.307214    2777 out.go:304] Setting ErrFile to fd 2...
	I0708 12:41:43.307217    2777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:41:43.307401    2777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:41:43.307584    2777 out.go:298] Setting JSON to false
	I0708 12:41:43.307601    2777 mustload.go:65] Loading cluster: ha-881000
	I0708 12:41:43.307634    2777 notify.go:220] Checking for updates...
	I0708 12:41:43.307859    2777 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:41:43.307867    2777 status.go:255] checking status of ha-881000 ...
	I0708 12:41:43.308800    2777 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0708 12:41:43.308822    2777 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:41:43.309007    2777 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:41:43.309150    2777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 12:41:43.309159    2777 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:41:43.342564    2777 ssh_runner.go:195] Run: systemctl --version
	I0708 12:41:43.344932    2777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0708 12:41:43.351470    2777 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:41:43.351481    2777 api_server.go:166] Checking apiserver status ...
	I0708 12:41:43.351502    2777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0708 12:41:43.356271    2777 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:41:43.356279    2777 status.go:422] ha-881000 apiserver status = Stopped (err=<nil>)
	I0708 12:41:43.356283    2777 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr" : exit status 6
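
Every one of the status retries above fails the same way: status.go:417 cannot find a cluster entry named "ha-881000" in /Users/jenkins/minikube-integration/19195-1270/kubeconfig, so the profile is reported as Kubeconfig: Misconfigured and each status call exits nonzero (status 6). A minimal sketch of such a kubeconfig lookup, assuming k8s.io/client-go is available and reading the path from KUBECONFIG (illustrative; minikube's own endpoint check may differ):

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig and look up the cluster entry for the profile;
        // a missing entry is what status reports as Misconfigured.
        path := os.Getenv("KUBECONFIG")
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            os.Exit(1)
        }
        cluster, ok := cfg.Clusters["ha-881000"]
        if !ok {
            fmt.Println(`"ha-881000" does not appear in`, path)
            os.Exit(1) // the captured status runs exit nonzero (status 6) in this situation
        }
        fmt.Println("endpoint:", cluster.Server)
    }
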
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 6 (68.839042ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 12:41:43.421029    2779 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (53.15s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-881000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-881000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-881000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-881000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-881000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-881000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-881000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-881000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\"
:\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\
"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 6 (69.1545ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 12:41:43.565625    2784 status.go:417] kubeconfig endpoint: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.14s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 node delete m03 -v=7 --alsologtostderr: exit status 80 (93.471916ms)

                                                
                                                
-- stdout --
	* Deleting node m03 from cluster ha-881000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:43:36.047198    2818 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:43:36.047423    2818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:43:36.047427    2818 out.go:304] Setting ErrFile to fd 2...
	I0708 12:43:36.047429    2818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:43:36.047567    2818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:43:36.047785    2818 mustload.go:65] Loading cluster: ha-881000
	I0708 12:43:36.047965    2818 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:43:36.048706    2818 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:43:36.048814    2818 api_server.go:166] Checking apiserver status ...
	I0708 12:43:36.048844    2818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:43:36.048852    2818 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:43:36.082040    2818 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2008/cgroup
	W0708 12:43:36.085726    2818 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2008/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:43:36.085773    2818 ssh_runner.go:195] Run: ls
	I0708 12:43:36.087425    2818 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:43:36.090293    2818 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0708 12:43:36.094507    2818 out.go:177] * Deleting node m03 from cluster ha-881000
	I0708 12:43:36.098560    2818 out.go:177] 
	W0708 12:43:36.102501    2818 out.go:239] X Exiting due to GUEST_NODE_DELETE: deleting node: retrieve node: Could not find node m03
	X Exiting due to GUEST_NODE_DELETE: deleting node: retrieve node: Could not find node m03
	W0708 12:43:36.102506    2818 out.go:239] * 
	* 
	W0708 12:43:36.104000    2818 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:43:36.108473    2818 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-881000 node delete m03 -v=7 --alsologtostderr": exit status 80
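
The delete fails because the restarted ha-881000 profile records a single unnamed control-plane node (see the Nodes entry in the config above), so a lookup for "m03" cannot succeed and minikube exits 80 with GUEST_NODE_DELETE. A hedged sketch of such a retrieve-by-name step over the profile's node list (retrieve is a hypothetical helper, not minikube's node.Retrieve):

    package main

    import "fmt"

    // node holds the profile-config node fields relevant here.
    type node struct {
        Name         string
        IP           string
        ControlPlane bool
    }

    // retrieve returns the node with the given name, or an error matching the
    // "Could not find node m03" failure in the log.
    func retrieve(nodes []node, name string) (node, error) {
        for _, n := range nodes {
            if n.Name == name {
                return n, nil
            }
        }
        return node{}, fmt.Errorf("retrieve node: Could not find node %s", name)
    }

    func main() {
        // The restarted ha-881000 profile lists only one node, with an empty name.
        nodes := []node{{Name: "", IP: "192.168.105.5", ControlPlane: true}}
        if _, err := retrieve(nodes, "m03"); err != nil {
            fmt.Println(err) // retrieve node: Could not find node m03
        }
    }
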
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:498: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:501: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:504: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:507: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:524: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	'

                                                
                                                
-- /stdout --
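
The Ready check at ha_test.go:519 renders one line per node condition of type Ready with the go-template above, and ha_test.go:524 then expects three True lines; with only one node left in the profile the captured output contains a single True. A minimal sketch of the counting step (hypothetical helper code, not the test's own implementation):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        // Output captured from the go-template run in the log (quotes included):
        // one node reports Ready=True where the test expects three.
        got := "' True\n'"
        ready := 0
        sc := bufio.NewScanner(strings.NewReader(got))
        for sc.Scan() {
            if strings.TrimSpace(strings.Trim(sc.Text(), "'")) == "True" {
                ready++
            }
        }
        fmt.Printf("nodes Ready=True: %d (want 3)\n", ready)
    }
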
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-881000 -- apply -f             | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- rollout status       | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | deployment/busybox                   |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  --             | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  --             | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  -- nslookup    | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| node    | add -p ha-881000 -v=7                | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-881000 node stop m02 -v=7         | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-881000 node start m02 -v=7        | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-881000 -v=7               | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:41 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-881000 -v=7                    | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:41 PDT | 08 Jul 24 12:42 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-881000 --wait=true -v=7        | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:42 PDT | 08 Jul 24 12:43 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-881000                    | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT |                     |
	| node    | ha-881000 node delete m03 -v=7       | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 12:42:37
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 12:42:37.929795    2792 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:42:37.929956    2792 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:42:37.929961    2792 out.go:304] Setting ErrFile to fd 2...
	I0708 12:42:37.929964    2792 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:42:37.930126    2792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:42:37.931417    2792 out.go:298] Setting JSON to false
	I0708 12:42:37.950421    2792 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2525,"bootTime":1720465232,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:42:37.950488    2792 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:42:37.955594    2792 out.go:177] * [ha-881000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:42:37.961390    2792 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:42:37.961418    2792 notify.go:220] Checking for updates...
	I0708 12:42:37.969375    2792 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:42:37.973398    2792 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:42:37.974740    2792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:42:37.977341    2792 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:42:37.980341    2792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:42:37.983678    2792 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:42:37.983736    2792 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:42:37.988290    2792 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 12:42:37.995370    2792 start.go:297] selected driver: qemu2
	I0708 12:42:37.995378    2792 start.go:901] validating driver "qemu2" against &{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:42:37.995437    2792 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:42:37.997691    2792 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:42:37.997741    2792 cni.go:84] Creating CNI manager for ""
	I0708 12:42:37.997746    2792 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:42:37.997797    2792 start.go:340] cluster config:
	{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:42:38.001327    2792 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:42:38.008296    2792 out.go:177] * Starting "ha-881000" primary control-plane node in "ha-881000" cluster
	I0708 12:42:38.012364    2792 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:42:38.012385    2792 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:42:38.012393    2792 cache.go:56] Caching tarball of preloaded images
	I0708 12:42:38.012464    2792 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:42:38.012471    2792 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:42:38.012532    2792 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json ...
	I0708 12:42:38.012953    2792 start.go:360] acquireMachinesLock for ha-881000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:42:38.012989    2792 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "ha-881000"
	I0708 12:42:38.012997    2792 start.go:96] Skipping create...Using existing machine configuration
	I0708 12:42:38.013004    2792 fix.go:54] fixHost starting: 
	I0708 12:42:38.013127    2792 fix.go:112] recreateIfNeeded on ha-881000: state=Stopped err=<nil>
	W0708 12:42:38.013136    2792 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 12:42:38.020265    2792 out.go:177] * Restarting existing qemu2 VM for "ha-881000" ...
	I0708 12:42:38.024422    2792 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:75:66:b4:8a:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/disk.qcow2
	I0708 12:42:38.064421    2792 main.go:141] libmachine: STDOUT: 
	I0708 12:42:38.064451    2792 main.go:141] libmachine: STDERR: 
	I0708 12:42:38.064456    2792 main.go:141] libmachine: Attempt 0
	I0708 12:42:38.064467    2792 main.go:141] libmachine: Searching for de:75:66:b4:8a:80 in /var/db/dhcpd_leases ...
	I0708 12:42:38.064527    2792 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0708 12:42:38.064545    2792 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668c412b}
	I0708 12:42:38.064549    2792 main.go:141] libmachine: Found match: de:75:66:b4:8a:80
	I0708 12:42:38.064553    2792 main.go:141] libmachine: IP: 192.168.105.5
	I0708 12:42:38.064557    2792 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
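The lease scan above recovers the VM's IP by matching its MAC address in macOS's /var/db/dhcpd_leases. A minimal shell sketch of the same lookup (assuming the usual ip_address=/hw_address= entry layout; the field names are inferred, not taken from this log):

	MAC="de:75:66:b4:8a:80"
	awk -v mac="$MAC" '
	  /ip_address=/ { ip = substr($0, index($0, "=") + 1) }
	  /hw_address=/ { if (index($0, mac)) print ip }
	' /var/db/dhcpd_leases

minikube repeats this scan ("Attempt 0", "Attempt 1", ...) until the freshly booted VM shows up with a lease.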
	I0708 12:42:57.605102    2792 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json ...
	I0708 12:42:57.605793    2792 machine.go:94] provisionDockerMachine start ...
	I0708 12:42:57.605982    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:57.606471    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:57.606485    2792 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 12:42:57.682410    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 12:42:57.682463    2792 buildroot.go:166] provisioning hostname "ha-881000"
	I0708 12:42:57.682564    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:57.682825    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:57.682837    2792 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-881000 && echo "ha-881000" | sudo tee /etc/hostname
	I0708 12:42:57.754602    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-881000
	
	I0708 12:42:57.754677    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:57.754847    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:57.754860    2792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-881000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-881000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-881000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 12:42:57.814080    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 12:42:57.814095    2792 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19195-1270/.minikube CaCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19195-1270/.minikube}
	I0708 12:42:57.814110    2792 buildroot.go:174] setting up certificates
	I0708 12:42:57.814119    2792 provision.go:84] configureAuth start
	I0708 12:42:57.814126    2792 provision.go:143] copyHostCerts
	I0708 12:42:57.814148    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 12:42:57.814214    2792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem, removing ...
	I0708 12:42:57.814220    2792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 12:42:57.814354    2792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem (1123 bytes)
	I0708 12:42:57.814547    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 12:42:57.814576    2792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem, removing ...
	I0708 12:42:57.814580    2792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 12:42:57.814683    2792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem (1675 bytes)
	I0708 12:42:57.814819    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 12:42:57.814851    2792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem, removing ...
	I0708 12:42:57.814855    2792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 12:42:57.814933    2792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem (1078 bytes)
	I0708 12:42:57.815103    2792 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem org=jenkins.ha-881000 san=[127.0.0.1 192.168.105.5 ha-881000 localhost minikube]
	I0708 12:42:57.899167    2792 provision.go:177] copyRemoteCerts
	I0708 12:42:57.899194    2792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 12:42:57.899201    2792 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:42:57.927671    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 12:42:57.927712    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 12:42:57.935956    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 12:42:57.936005    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0708 12:42:57.943804    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 12:42:57.943837    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 12:42:57.951970    2792 provision.go:87] duration metric: took 137.847333ms to configureAuth
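configureAuth regenerates the Docker TLS server certificate so its SANs cover the VM's address and names ([127.0.0.1 192.168.105.5 ha-881000 localhost minikube] above), signs it with the local CA, and copies the results to /etc/docker on the guest. A rough hand-run equivalent with openssl under bash (an illustrative sketch with an arbitrary validity period, not the code path minikube actually uses):

	CERTS=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs
	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.ha-881000" \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" \
	  -CAcreateserial -days 1095 -out server.pem \
	  -extfile <(echo "subjectAltName=IP:127.0.0.1,IP:192.168.105.5,DNS:ha-881000,DNS:localhost,DNS:minikube")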
	I0708 12:42:57.951978    2792 buildroot.go:189] setting minikube options for container-runtime
	I0708 12:42:57.952085    2792 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:42:57.952113    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:57.952210    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:57.952214    2792 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0708 12:42:58.005015    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0708 12:42:58.005022    2792 buildroot.go:70] root file system type: tmpfs
	I0708 12:42:58.005079    2792 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0708 12:42:58.005112    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:58.005198    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:58.005231    2792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0708 12:42:58.062255    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0708 12:42:58.062306    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:58.062412    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:58.062420    2792 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0708 12:42:59.459311    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0708 12:42:59.459323    2792 machine.go:97] duration metric: took 1.853564625s to provisionDockerMachine
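With the new unit installed and docker restarted by the diff/mv/systemctl command above, the effective service configuration can be spot-checked over the same SSH path. A quick manual verification (assuming the minikube ssh wrapper; the final docker info check mirrors the one this log runs later):

	minikube -p ha-881000 ssh -- 'sudo systemctl cat docker | grep ^ExecStart'
	minikube -p ha-881000 ssh -- sudo systemctl is-active docker
	minikube -p ha-881000 ssh -- docker info --format '{{.CgroupDriver}}'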
	I0708 12:42:59.459331    2792 start.go:293] postStartSetup for "ha-881000" (driver="qemu2")
	I0708 12:42:59.459338    2792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 12:42:59.459407    2792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 12:42:59.459418    2792 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:42:59.490481    2792 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 12:42:59.491811    2792 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 12:42:59.491818    2792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/addons for local assets ...
	I0708 12:42:59.491918    2792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/files for local assets ...
	I0708 12:42:59.492051    2792 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> 17672.pem in /etc/ssl/certs
	I0708 12:42:59.492056    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> /etc/ssl/certs/17672.pem
	I0708 12:42:59.492184    2792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 12:42:59.495802    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /etc/ssl/certs/17672.pem (1708 bytes)
	I0708 12:42:59.504060    2792 start.go:296] duration metric: took 44.72475ms for postStartSetup
	I0708 12:42:59.504075    2792 fix.go:56] duration metric: took 21.491585916s for fixHost
	I0708 12:42:59.504112    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:59.504221    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:59.504226    2792 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 12:42:59.555643    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720467779.660039379
	
	I0708 12:42:59.555651    2792 fix.go:216] guest clock: 1720467779.660039379
	I0708 12:42:59.555655    2792 fix.go:229] Guest: 2024-07-08 12:42:59.660039379 -0700 PDT Remote: 2024-07-08 12:42:59.504077 -0700 PDT m=+21.609210709 (delta=155.962379ms)
	I0708 12:42:59.555675    2792 fix.go:200] guest clock delta is within tolerance: 155.962379ms
	I0708 12:42:59.555677    2792 start.go:83] releasing machines lock for "ha-881000", held for 21.543198875s
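fixHost compares the guest clock against the host and only intervenes if the delta exceeds a tolerance; here the 155.962379ms skew was accepted. A hand-rolled version of that check (the 2-second threshold is illustrative, and the ssh wrapper is assumed):

	host_now=$(date +%s)
	guest_now=$(minikube -p ha-881000 ssh -- date +%s | tr -d '\r')
	delta=$(( host_now > guest_now ? host_now - guest_now : guest_now - host_now ))
	[ "$delta" -gt 2 ] && echo "guest clock skew of ${delta}s exceeds tolerance"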
	I0708 12:42:59.555983    2792 ssh_runner.go:195] Run: cat /version.json
	I0708 12:42:59.555998    2792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 12:42:59.555997    2792 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:42:59.556014    2792 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:42:59.630004    2792 ssh_runner.go:195] Run: systemctl --version
	I0708 12:42:59.632528    2792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 12:42:59.634713    2792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 12:42:59.634742    2792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 12:42:59.641382    2792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 12:42:59.641391    2792 start.go:494] detecting cgroup driver to use...
	I0708 12:42:59.641465    2792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:42:59.648306    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0708 12:42:59.652336    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0708 12:42:59.656198    2792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0708 12:42:59.656228    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0708 12:42:59.660186    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:42:59.664020    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0708 12:42:59.668023    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:42:59.672109    2792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 12:42:59.675874    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0708 12:42:59.679612    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0708 12:42:59.683413    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0708 12:42:59.687103    2792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 12:42:59.690354    2792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 12:42:59.693546    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:42:59.793928    2792 ssh_runner.go:195] Run: sudo systemctl restart containerd
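The sed edits above force containerd onto the cgroupfs driver, pin the pause image, and point its CNI conf_dir at /etc/cni/net.d before this restart. A quick way to confirm the result on the guest (expected values reconstructed from the sed expressions above, not dumped from the VM):

	minikube -p ha-881000 ssh -- grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# expected, approximately:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"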
	I0708 12:42:59.801903    2792 start.go:494] detecting cgroup driver to use...
	I0708 12:42:59.801983    2792 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0708 12:42:59.808101    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:42:59.813711    2792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 12:42:59.820068    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:42:59.825566    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:42:59.830939    2792 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0708 12:42:59.863864    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:42:59.869916    2792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:42:59.876286    2792 ssh_runner.go:195] Run: which cri-dockerd
	I0708 12:42:59.877768    2792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0708 12:42:59.880958    2792 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0708 12:42:59.886783    2792 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0708 12:42:59.960067    2792 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0708 12:43:00.028561    2792 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0708 12:43:00.028631    2792 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0708 12:43:00.034849    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:43:00.122720    2792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 12:43:02.305708    2792 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.183023708s)
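The 130-byte /etc/docker/daemon.json pushed a few lines above is not echoed in the log; a typical payload matching the "configuring docker to use cgroupfs" message looks roughly like the following (contents are an assumption based on minikube's usual defaults, not copied from this run):

	sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker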
	I0708 12:43:02.305781    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0708 12:43:02.311179    2792 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0708 12:43:02.317687    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 12:43:02.322820    2792 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0708 12:43:02.401504    2792 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0708 12:43:02.464769    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:43:02.528874    2792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0708 12:43:02.535741    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 12:43:02.541590    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:43:02.625585    2792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0708 12:43:02.650743    2792 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0708 12:43:02.650828    2792 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0708 12:43:02.654105    2792 start.go:562] Will wait 60s for crictl version
	I0708 12:43:02.654151    2792 ssh_runner.go:195] Run: which crictl
	I0708 12:43:02.655436    2792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 12:43:02.675462    2792 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0708 12:43:02.675525    2792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 12:43:02.685440    2792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 12:43:02.699878    2792 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0708 12:43:02.700008    2792 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0708 12:43:02.701732    2792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 12:43:02.705787    2792 kubeadm.go:877] updating cluster {Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 12:43:02.705834    2792 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:43:02.705879    2792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 12:43:02.710507    2792 docker.go:685] Got preloaded images: 
	I0708 12:43:02.710516    2792 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0708 12:43:02.710553    2792 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0708 12:43:02.713839    2792 ssh_runner.go:195] Run: which lz4
	I0708 12:43:02.715094    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0708 12:43:02.715184    2792 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0708 12:43:02.716549    2792 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 12:43:02.716564    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (335401736 bytes)
	I0708 12:43:04.005323    2792 docker.go:649] duration metric: took 1.290201209s to copy over tarball
	I0708 12:43:04.005379    2792 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 12:43:05.060774    2792 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.055402791s)
	I0708 12:43:05.060797    2792 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 12:43:05.075952    2792 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0708 12:43:05.079853    2792 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0708 12:43:05.085627    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:43:05.155275    2792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 12:43:07.363151    2792 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.207908791s)
	I0708 12:43:07.363264    2792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 12:43:07.369552    2792 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0708 12:43:07.369562    2792 cache_images.go:84] Images are preloaded, skipping loading
	I0708 12:43:07.369567    2792 kubeadm.go:928] updating node { 192.168.105.5 8443 v1.30.2 docker true true} ...
	I0708 12:43:07.369641    2792 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-881000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 12:43:07.369705    2792 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0708 12:43:07.378105    2792 cni.go:84] Creating CNI manager for ""
	I0708 12:43:07.378113    2792 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:43:07.378118    2792 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 12:43:07.378130    2792 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-881000 NodeName:ha-881000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 12:43:07.378203    2792 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-881000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 12:43:07.378254    2792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 12:43:07.381873    2792 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 12:43:07.381909    2792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 12:43:07.385107    2792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0708 12:43:07.390946    2792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 12:43:07.396623    2792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
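The rendered kubeadm config above is pushed here as /var/tmp/minikube/kubeadm.yaml.new and later promoted to kubeadm.yaml (see the cp further down); it ultimately drives kubeadm init on the node. Run by hand, the equivalent would be roughly the following (the preflight-error exclusions minikube actually passes are not shown in this log, so =all is a placeholder):

	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo /var/lib/minikube/binaries/v1.30.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=all   # placeholder; not the real exclusion list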
	I0708 12:43:07.402727    2792 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0708 12:43:07.403984    2792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 12:43:07.408235    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:43:07.489744    2792 ssh_runner.go:195] Run: sudo systemctl start kubelet
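kubelet is started here before kubeadm has written /etc/kubernetes/kubelet.conf, so systemd typically keeps restarting it until init completes; that is expected. If a run stalls at this point, the unit can be inspected over SSH (assuming the ssh wrapper):

	minikube -p ha-881000 ssh -- sudo systemctl status kubelet --no-pager -l
	minikube -p ha-881000 ssh -- 'sudo journalctl -u kubelet --no-pager | tail -n 20'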
	I0708 12:43:07.497794    2792 certs.go:68] Setting up /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000 for IP: 192.168.105.5
	I0708 12:43:07.497805    2792 certs.go:194] generating shared ca certs ...
	I0708 12:43:07.497814    2792 certs.go:226] acquiring lock for ca certs: {Name:mka13b605a6983b2618b91f3a0bdec43c132a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.497997    2792 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key
	I0708 12:43:07.498047    2792 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key
	I0708 12:43:07.498057    2792 certs.go:256] generating profile certs ...
	I0708 12:43:07.498089    2792 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key
	I0708 12:43:07.498097    2792 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt with IP's: []
	I0708 12:43:07.610199    2792 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt ...
	I0708 12:43:07.610210    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt: {Name:mk17d6ffdb6e4f5c9c3a6134a2ecb0fbf924f72e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.610490    2792 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key ...
	I0708 12:43:07.610493    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key: {Name:mkb0b24e1d4b3fead9c039f8e3325a790cd2b327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.610624    2792 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.174b6ad8
	I0708 12:43:07.610632    2792 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt.174b6ad8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5]
	I0708 12:43:07.817295    2792 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt.174b6ad8 ...
	I0708 12:43:07.817301    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt.174b6ad8: {Name:mkcff40587e3bcbf1550d8c6105c1ac2a7f41481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.817491    2792 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.174b6ad8 ...
	I0708 12:43:07.817496    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.174b6ad8: {Name:mkc5dd491403231f22bb82af593a8317b9d81626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.817620    2792 certs.go:381] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt.174b6ad8 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt
	I0708 12:43:07.817911    2792 certs.go:385] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.174b6ad8 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key
	I0708 12:43:07.818078    2792 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key
	I0708 12:43:07.818089    2792 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt with IP's: []
	I0708 12:43:07.864462    2792 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt ...
	I0708 12:43:07.864466    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt: {Name:mkc7960df69214b7fc896c3d856e9afae85b0de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.864636    2792 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key ...
	I0708 12:43:07.864640    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key: {Name:mk51d5b20112d4dd24f6f8c5413a022430f0f839 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.864777    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 12:43:07.864792    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 12:43:07.864803    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 12:43:07.864817    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 12:43:07.864828    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 12:43:07.864843    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 12:43:07.864853    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 12:43:07.864865    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 12:43:07.864954    2792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem (1338 bytes)
	W0708 12:43:07.864992    2792 certs.go:480] ignoring /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767_empty.pem, impossibly tiny 0 bytes
	I0708 12:43:07.864999    2792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 12:43:07.865025    2792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem (1078 bytes)
	I0708 12:43:07.865048    2792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem (1123 bytes)
	I0708 12:43:07.865070    2792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem (1675 bytes)
	I0708 12:43:07.865119    2792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem (1708 bytes)
	I0708 12:43:07.865148    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem -> /usr/share/ca-certificates/1767.pem
	I0708 12:43:07.865162    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> /usr/share/ca-certificates/17672.pem
	I0708 12:43:07.865173    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:43:07.865490    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 12:43:07.875044    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 12:43:07.883695    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 12:43:07.892161    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 12:43:07.900426    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0708 12:43:07.908600    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 12:43:07.916656    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 12:43:07.924696    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 12:43:07.932876    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem --> /usr/share/ca-certificates/1767.pem (1338 bytes)
	I0708 12:43:07.940800    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /usr/share/ca-certificates/17672.pem (1708 bytes)
	I0708 12:43:07.948776    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 12:43:07.956788    2792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 12:43:07.962647    2792 ssh_runner.go:195] Run: openssl version
	I0708 12:43:07.964895    2792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1767.pem && ln -fs /usr/share/ca-certificates/1767.pem /etc/ssl/certs/1767.pem"
	I0708 12:43:07.968700    2792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1767.pem
	I0708 12:43:07.970328    2792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:34 /usr/share/ca-certificates/1767.pem
	I0708 12:43:07.970349    2792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1767.pem
	I0708 12:43:07.972403    2792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1767.pem /etc/ssl/certs/51391683.0"
	I0708 12:43:07.976356    2792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17672.pem && ln -fs /usr/share/ca-certificates/17672.pem /etc/ssl/certs/17672.pem"
	I0708 12:43:07.980289    2792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17672.pem
	I0708 12:43:07.981950    2792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:34 /usr/share/ca-certificates/17672.pem
	I0708 12:43:07.981968    2792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17672.pem
	I0708 12:43:07.983941    2792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17672.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 12:43:07.987886    2792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 12:43:07.992006    2792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:43:07.993803    2792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:43:07.993828    2792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:43:07.995840    2792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
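The openssl x509 -hash calls above compute the subject-name hashes that OpenSSL expects as symlink names under /etc/ssl/certs (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). Done by hand for one CA certificate, the same convention looks roughly like this:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"

On distributions with the ca-certificates tooling installed, sudo update-ca-certificates regenerates these hash links in one shot.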
	I0708 12:43:07.999786    2792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 12:43:08.001290    2792 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 12:43:08.001328    2792 kubeadm.go:391] StartCluster: {Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:43:08.001392    2792 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0708 12:43:08.006839    2792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 12:43:08.010550    2792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 12:43:08.013771    2792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 12:43:08.017020    2792 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 12:43:08.017026    2792 kubeadm.go:156] found existing configuration files:
	
	I0708 12:43:08.017048    2792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 12:43:08.020353    2792 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 12:43:08.020382    2792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 12:43:08.023849    2792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 12:43:08.027218    2792 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 12:43:08.027242    2792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 12:43:08.030651    2792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 12:43:08.033636    2792 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 12:43:08.033665    2792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 12:43:08.036722    2792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 12:43:08.040035    2792 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 12:43:08.040060    2792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 12:43:08.043777    2792 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 12:43:08.066791    2792 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 12:43:08.066820    2792 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 12:43:08.111846    2792 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 12:43:08.111910    2792 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 12:43:08.111956    2792 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 12:43:08.189910    2792 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 12:43:08.202045    2792 out.go:204]   - Generating certificates and keys ...
	I0708 12:43:08.202077    2792 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 12:43:08.202103    2792 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 12:43:08.262225    2792 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0708 12:43:08.397239    2792 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0708 12:43:08.443677    2792 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0708 12:43:08.503721    2792 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0708 12:43:08.620608    2792 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0708 12:43:08.620674    2792 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-881000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0708 12:43:08.736569    2792 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0708 12:43:08.736635    2792 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-881000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0708 12:43:08.840660    2792 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0708 12:43:08.981914    2792 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0708 12:43:09.139750    2792 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0708 12:43:09.139786    2792 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 12:43:09.233687    2792 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 12:43:09.360724    2792 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 12:43:09.415015    2792 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 12:43:09.657645    2792 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 12:43:09.770358    2792 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 12:43:09.770630    2792 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 12:43:09.771853    2792 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 12:43:09.779159    2792 out.go:204]   - Booting up control plane ...
	I0708 12:43:09.779210    2792 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 12:43:09.779243    2792 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 12:43:09.779274    2792 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 12:43:09.780102    2792 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 12:43:09.780146    2792 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 12:43:09.780178    2792 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 12:43:09.880199    2792 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 12:43:09.880240    2792 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 12:43:10.383195    2792 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 507.67325ms
	I0708 12:43:10.383437    2792 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 12:43:13.384095    2792 kubeadm.go:309] [api-check] The API server is healthy after 3.001497085s
	I0708 12:43:13.390128    2792 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 12:43:13.394120    2792 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 12:43:13.400758    2792 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 12:43:13.400870    2792 kubeadm.go:309] [mark-control-plane] Marking the node ha-881000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 12:43:13.403884    2792 kubeadm.go:309] [bootstrap-token] Using token: djpe70.b1gw9fb9jlqt64nh
	I0708 12:43:13.412978    2792 out.go:204]   - Configuring RBAC rules ...
	I0708 12:43:13.413033    2792 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 12:43:13.413074    2792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 12:43:13.414844    2792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 12:43:13.415731    2792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 12:43:13.416675    2792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 12:43:13.417635    2792 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 12:43:13.787461    2792 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 12:43:14.193410    2792 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 12:43:14.788258    2792 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 12:43:14.788628    2792 kubeadm.go:309] 
	I0708 12:43:14.788657    2792 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 12:43:14.788668    2792 kubeadm.go:309] 
	I0708 12:43:14.788705    2792 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 12:43:14.788710    2792 kubeadm.go:309] 
	I0708 12:43:14.788722    2792 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 12:43:14.788764    2792 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 12:43:14.788799    2792 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 12:43:14.788802    2792 kubeadm.go:309] 
	I0708 12:43:14.788829    2792 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 12:43:14.788834    2792 kubeadm.go:309] 
	I0708 12:43:14.788864    2792 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 12:43:14.788867    2792 kubeadm.go:309] 
	I0708 12:43:14.788901    2792 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 12:43:14.788940    2792 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 12:43:14.788973    2792 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 12:43:14.788976    2792 kubeadm.go:309] 
	I0708 12:43:14.789023    2792 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 12:43:14.789072    2792 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 12:43:14.789076    2792 kubeadm.go:309] 
	I0708 12:43:14.789116    2792 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token djpe70.b1gw9fb9jlqt64nh \
	I0708 12:43:14.789181    2792 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e \
	I0708 12:43:14.789193    2792 kubeadm.go:309] 	--control-plane 
	I0708 12:43:14.789199    2792 kubeadm.go:309] 
	I0708 12:43:14.789235    2792 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 12:43:14.789238    2792 kubeadm.go:309] 
	I0708 12:43:14.789284    2792 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token djpe70.b1gw9fb9jlqt64nh \
	I0708 12:43:14.789340    2792 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e 
	I0708 12:43:14.789409    2792 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
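	If the join command ever needs to be reconstructed, the --discovery-token-ca-cert-hash value printed above can be re-derived from the cluster CA; a sketch, assuming the standard ca.crt name under the certificate directory /var/lib/minikube/certs shown earlier:

	    # recompute kubeadm's discovery hash: SHA-256 of the CA's DER-encoded public key
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'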
	I0708 12:43:14.789417    2792 cni.go:84] Creating CNI manager for ""
	I0708 12:43:14.789421    2792 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:43:14.792826    2792 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0708 12:43:14.799892    2792 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0708 12:43:14.801654    2792 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0708 12:43:14.801660    2792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0708 12:43:14.807319    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0708 12:43:14.942158    2792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 12:43:14.942221    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:14.942234    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-881000 minikube.k8s.io/updated_at=2024_07_08T12_43_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=ha-881000 minikube.k8s.io/primary=true
	I0708 12:43:14.992890    2792 ops.go:34] apiserver oom_adj: -16
	I0708 12:43:14.993004    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:15.495039    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:15.995135    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:16.495063    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:16.995051    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:17.495017    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:17.995012    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:18.495060    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:18.993265    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:19.494955    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:19.994930    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:20.495006    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:20.994990    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:21.494952    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:21.994928    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:22.494878    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:22.994904    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:23.494906    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:23.994925    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:24.494949    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:24.994856    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:25.494928    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:25.994867    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:26.494888    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:26.994661    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:27.494800    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:27.994750    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:28.494811    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:28.528657    2792 kubeadm.go:1107] duration metric: took 13.586806166s to wait for elevateKubeSystemPrivileges
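	The polling loop above simply re-runs "kubectl get sa default" until the controller-manager has created the default ServiceAccount, which is what the elevateKubeSystemPrivileges wait is gated on; the equivalent manual check, using the same binary and kubeconfig as in the log:

	    # exits 0 once the "default" ServiceAccount exists in the default namespace
	    sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig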
	W0708 12:43:28.528681    2792 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 12:43:28.528685    2792 kubeadm.go:393] duration metric: took 20.527849042s to StartCluster
	I0708 12:43:28.528695    2792 settings.go:142] acquiring lock: {Name:mka0c397a57d617e1d77508d22cc3adb2edf5927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:28.528799    2792 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:43:28.529135    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:28.529561    2792 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:43:28.529588    2792 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 12:43:28.529631    2792 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:43:28.529634    2792 addons.go:69] Setting storage-provisioner=true in profile "ha-881000"
	I0708 12:43:28.529648    2792 addons.go:234] Setting addon storage-provisioner=true in "ha-881000"
	I0708 12:43:28.529652    2792 addons.go:69] Setting default-storageclass=true in profile "ha-881000"
	I0708 12:43:28.529662    2792 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:43:28.529666    2792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-881000"
	I0708 12:43:28.530389    2792 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:43:28.530507    2792 kapi.go:59] client config for ha-881000: &rest.Config{Host:"https://192.168.105.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103ff74f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 12:43:28.530752    2792 cert_rotation.go:137] Starting client certificate rotation controller
	I0708 12:43:28.530793    2792 addons.go:234] Setting addon default-storageclass=true in "ha-881000"
	I0708 12:43:28.530802    2792 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:43:28.532502    2792 out.go:177] * Verifying Kubernetes components...
	I0708 12:43:28.532852    2792 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 12:43:28.535674    2792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 12:43:28.535682    2792 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:43:28.539374    2792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 12:43:28.543430    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:43:28.546379    2792 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 12:43:28.546387    2792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 12:43:28.546396    2792 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:43:28.629561    2792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 12:43:28.639428    2792 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:43:28.639559    2792 kapi.go:59] client config for ha-881000: &rest.Config{Host:"https://192.168.105.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103ff74f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 12:43:28.639687    2792 node_ready.go:35] waiting up to 6m0s for node "ha-881000" to be "Ready" ...
	I0708 12:43:28.639738    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:28.639742    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:28.639746    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:28.639748    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:28.644083    2792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 12:43:28.646478    2792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 12:43:28.651071    2792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 12:43:28.707960    2792 round_trippers.go:463] GET https://192.168.105.5:8443/apis/storage.k8s.io/v1/storageclasses
	I0708 12:43:28.707968    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:28.707972    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:28.707975    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:28.709000    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:28.709236    2792 round_trippers.go:463] PUT https://192.168.105.5:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0708 12:43:28.709242    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:28.709246    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:28.709248    2792 round_trippers.go:473]     Content-Type: application/json
	I0708 12:43:28.709250    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:28.710374    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:28.813079    2792 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0708 12:43:28.820264    2792 addons.go:510] duration metric: took 290.685ms for enable addons: enabled=[default-storageclass storage-provisioner]
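	Both addons enabled during this start can also be inspected or toggled from the minikube CLI for the same profile; a brief sketch (profile name taken from the log):

	    # show addon state for the ha-881000 profile
	    minikube -p ha-881000 addons list
	    # re-enable the two addons applied above, if needed
	    minikube -p ha-881000 addons enable storage-provisioner
	    minikube -p ha-881000 addons enable default-storageclass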
	I0708 12:43:29.141506    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:29.141520    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:29.141524    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:29.141526    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:29.142697    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:29.641757    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:29.641768    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:29.641772    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:29.641777    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:29.643254    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:30.141793    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:30.141804    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:30.141816    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:30.141819    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:30.143102    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:30.641779    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:30.641790    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:30.641794    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:30.641796    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:30.643091    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:30.643289    2792 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:43:31.141806    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:31.141820    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:31.141824    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:31.141839    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:31.143113    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:31.641768    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:31.641790    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:31.641795    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:31.641798    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:31.643316    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:32.141787    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:32.141802    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:32.141806    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:32.141808    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:32.143252    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:32.641521    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:32.641533    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:32.641537    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:32.641539    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:32.642587    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:33.140521    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:33.140537    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:33.140541    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:33.140544    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:33.141933    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:33.142232    2792 node_ready.go:49] node "ha-881000" has status "Ready":"True"
	I0708 12:43:33.142246    2792 node_ready.go:38] duration metric: took 4.502647542s for node "ha-881000" to be "Ready" ...
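	The readiness poll above (repeated GET /api/v1/nodes/ha-881000 roughly every 500ms) is equivalent to waiting on the node's Ready condition; a sketch of the same check with kubectl:

	    # block until the Ready condition is True, mirroring the 6m0s wait above
	    kubectl wait --for=condition=Ready node/ha-881000 --timeout=6m
	    # or read the condition directly
	    kubectl get node ha-881000 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'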
	I0708 12:43:33.142251    2792 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 12:43:33.142279    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:43:33.142284    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:33.142287    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:33.142290    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:33.144365    2792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 12:43:33.147139    2792 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:33.147171    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:43:33.147174    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:33.147178    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:33.147181    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:33.148098    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:33.148399    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:33.148403    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:33.148407    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:33.148409    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:33.149250    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:33.649364    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:43:33.649377    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:33.649382    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:33.649385    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:33.651362    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:33.651660    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:33.651664    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:33.651667    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:33.651670    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:33.652479    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.149215    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:43:34.149227    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.149232    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.149234    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.150649    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:34.151073    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.151081    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.151084    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.151087    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.151918    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.152362    2792 pod_ready.go:92] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:34.152372    2792 pod_ready.go:81] duration metric: took 1.005250625s for pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.152378    2792 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.152400    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rlj9v
	I0708 12:43:34.152403    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.152407    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.152410    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.153045    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.153564    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.153567    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.153571    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.153573    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.154272    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.154504    2792 pod_ready.go:92] pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:34.154509    2792 pod_ready.go:81] duration metric: took 2.127541ms for pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.154513    2792 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.154530    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-881000
	I0708 12:43:34.154532    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.154536    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.154539    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.155229    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.155503    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.155507    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.155510    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.155512    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.156131    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.156581    2792 pod_ready.go:92] pod "etcd-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:34.156587    2792 pod_ready.go:81] duration metric: took 2.070708ms for pod "etcd-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.156591    2792 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.156605    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-881000
	I0708 12:43:34.156608    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.156611    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.156614    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.157250    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.157511    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.157515    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.157518    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.157529    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.158271    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.158530    2792 pod_ready.go:92] pod "kube-apiserver-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:34.158535    2792 pod_ready.go:81] duration metric: took 1.941791ms for pod "kube-apiserver-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.158539    2792 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.158552    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-881000
	I0708 12:43:34.158554    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.158557    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.158559    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.159233    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.342494    2792 request.go:629] Waited for 182.939083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.342537    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.342539    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.342544    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.342548    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.343504    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.343715    2792 pod_ready.go:92] pod "kube-controller-manager-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:34.343721    2792 pod_ready.go:81] duration metric: took 185.184ms for pod "kube-controller-manager-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.343725    2792 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nqzkk" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.542488    2792 request.go:629] Waited for 198.731875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqzkk
	I0708 12:43:34.542545    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqzkk
	I0708 12:43:34.542548    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.542555    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.542557    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.543651    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:34.742459    2792 request.go:629] Waited for 198.213125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.742490    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.742494    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.742498    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.742501    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.743883    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:34.744203    2792 pod_ready.go:92] pod "kube-proxy-nqzkk" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:34.744211    2792 pod_ready.go:81] duration metric: took 400.49075ms for pod "kube-proxy-nqzkk" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.744216    2792 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.942451    2792 request.go:629] Waited for 198.2175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-881000
	I0708 12:43:34.942478    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-881000
	I0708 12:43:34.942483    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.942488    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.942492    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.943819    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:35.142463    2792 request.go:629] Waited for 198.40125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:35.142502    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:35.142505    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:35.142509    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:35.142511    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:35.143784    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:35.144105    2792 pod_ready.go:92] pod "kube-scheduler-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:35.144112    2792 pod_ready.go:81] duration metric: took 399.902209ms for pod "kube-scheduler-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:35.144116    2792 pod_ready.go:38] duration metric: took 2.001906709s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 12:43:35.144127    2792 api_server.go:52] waiting for apiserver process to appear ...
	I0708 12:43:35.144187    2792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:43:35.149888    2792 api_server.go:72] duration metric: took 6.620473375s to wait for apiserver process to appear ...
	I0708 12:43:35.149899    2792 api_server.go:88] waiting for apiserver healthz status ...
	I0708 12:43:35.149907    2792 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:43:35.152573    2792 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0708 12:43:35.152605    2792 round_trippers.go:463] GET https://192.168.105.5:8443/version
	I0708 12:43:35.152610    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:35.152614    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:35.152616    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:35.153274    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:35.153316    2792 api_server.go:141] control plane version: v1.30.2
	I0708 12:43:35.153323    2792 api_server.go:131] duration metric: took 3.420834ms to wait for apiserver health ...
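	The healthz and version probes above can be reproduced from a shell; a sketch, assuming the default RBAC that leaves /healthz and /version readable without client credentials:

	    # API server liveness, as checked at api_server.go:253
	    curl -k https://192.168.105.5:8443/healthz
	    # control-plane version, as fetched via the /version round trip
	    curl -k https://192.168.105.5:8443/version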
	I0708 12:43:35.153326    2792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 12:43:35.342448    2792 request.go:629] Waited for 189.1ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:43:35.342467    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:43:35.342471    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:35.342475    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:35.342477    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:35.344183    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:35.346101    2792 system_pods.go:59] 9 kube-system pods found
	I0708 12:43:35.346115    2792 system_pods.go:61] "coredns-7db6d8ff4d-2646x" [5a1aa968-b181-4318-a7f2-fb0f94617bd5] Running
	I0708 12:43:35.346120    2792 system_pods.go:61] "coredns-7db6d8ff4d-rlj9v" [57423cc1-b13f-45c7-b2df-71621270a61f] Running
	I0708 12:43:35.346122    2792 system_pods.go:61] "etcd-ha-881000" [b905dbae-009a-44f3-87e4-756dfae87ce6] Running
	I0708 12:43:35.346125    2792 system_pods.go:61] "kindnet-mmchf" [2f8fecb7-8906-46c9-9d55-c56254b8b3d7] Running
	I0708 12:43:35.346127    2792 system_pods.go:61] "kube-apiserver-ha-881000" [ea5dbd32-5574-42d6-9efd-3956e499027a] Running
	I0708 12:43:35.346128    2792 system_pods.go:61] "kube-controller-manager-ha-881000" [3f0c772a-e298-47e5-a20d-4201060d8e09] Running
	I0708 12:43:35.346130    2792 system_pods.go:61] "kube-proxy-nqzkk" [0037978f-9b19-49c2-a0fd-a7757effb5e9] Running
	I0708 12:43:35.346131    2792 system_pods.go:61] "kube-scheduler-ha-881000" [03ce3397-c2e8-4b90-a33c-11fb0368a30e] Running
	I0708 12:43:35.346133    2792 system_pods.go:61] "storage-provisioner" [62d01d4e-c78c-499e-9905-7ff510f1edea] Running
	I0708 12:43:35.346136    2792 system_pods.go:74] duration metric: took 192.811125ms to wait for pod list to return data ...
	I0708 12:43:35.346139    2792 default_sa.go:34] waiting for default service account to be created ...
	I0708 12:43:35.542444    2792 request.go:629] Waited for 196.279458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0708 12:43:35.542462    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0708 12:43:35.542466    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:35.542470    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:35.542472    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:35.543806    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:35.543904    2792 default_sa.go:45] found service account: "default"
	I0708 12:43:35.543911    2792 default_sa.go:55] duration metric: took 197.7735ms for default service account to be created ...
	I0708 12:43:35.543915    2792 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 12:43:35.742464    2792 request.go:629] Waited for 198.519833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:43:35.742504    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:43:35.742508    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:35.742518    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:35.742521    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:35.744207    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:35.746115    2792 system_pods.go:86] 9 kube-system pods found
	I0708 12:43:35.746124    2792 system_pods.go:89] "coredns-7db6d8ff4d-2646x" [5a1aa968-b181-4318-a7f2-fb0f94617bd5] Running
	I0708 12:43:35.746128    2792 system_pods.go:89] "coredns-7db6d8ff4d-rlj9v" [57423cc1-b13f-45c7-b2df-71621270a61f] Running
	I0708 12:43:35.746130    2792 system_pods.go:89] "etcd-ha-881000" [b905dbae-009a-44f3-87e4-756dfae87ce6] Running
	I0708 12:43:35.746134    2792 system_pods.go:89] "kindnet-mmchf" [2f8fecb7-8906-46c9-9d55-c56254b8b3d7] Running
	I0708 12:43:35.746137    2792 system_pods.go:89] "kube-apiserver-ha-881000" [ea5dbd32-5574-42d6-9efd-3956e499027a] Running
	I0708 12:43:35.746139    2792 system_pods.go:89] "kube-controller-manager-ha-881000" [3f0c772a-e298-47e5-a20d-4201060d8e09] Running
	I0708 12:43:35.746141    2792 system_pods.go:89] "kube-proxy-nqzkk" [0037978f-9b19-49c2-a0fd-a7757effb5e9] Running
	I0708 12:43:35.746143    2792 system_pods.go:89] "kube-scheduler-ha-881000" [03ce3397-c2e8-4b90-a33c-11fb0368a30e] Running
	I0708 12:43:35.746145    2792 system_pods.go:89] "storage-provisioner" [62d01d4e-c78c-499e-9905-7ff510f1edea] Running
	I0708 12:43:35.746149    2792 system_pods.go:126] duration metric: took 202.235167ms to wait for k8s-apps to be running ...
	I0708 12:43:35.746153    2792 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 12:43:35.746245    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 12:43:35.752114    2792 system_svc.go:56] duration metric: took 5.959167ms WaitForService to wait for kubelet
	I0708 12:43:35.752126    2792 kubeadm.go:576] duration metric: took 7.222725916s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:43:35.752136    2792 node_conditions.go:102] verifying NodePressure condition ...
	I0708 12:43:35.942427    2792 request.go:629] Waited for 190.273208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes
	I0708 12:43:35.942454    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes
	I0708 12:43:35.942457    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:35.942461    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:35.942463    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:35.943864    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:35.944098    2792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 12:43:35.944106    2792 node_conditions.go:123] node cpu capacity is 2
	I0708 12:43:35.944112    2792 node_conditions.go:105] duration metric: took 191.978375ms to run NodePressure ...
	I0708 12:43:35.944120    2792 start.go:240] waiting for startup goroutines ...
	I0708 12:43:35.944124    2792 start.go:245] waiting for cluster config update ...
	I0708 12:43:35.944130    2792 start.go:254] writing updated cluster config ...
	I0708 12:43:35.944462    2792 ssh_runner.go:195] Run: rm -f paused
	I0708 12:43:35.974714    2792 start.go:600] kubectl: 1.29.2, cluster: 1.30.2 (minor skew: 1)
	I0708 12:43:35.978450    2792 out.go:177] * Done! kubectl is now configured to use "ha-881000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.491401434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.491436854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.495158298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.495190139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.495198755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.495315420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.504865620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.504914109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.505015791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.505061658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 cri-dockerd[1188]: time="2024-07-08T19:43:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e5df0a87fa9059c3f3ac421a8755302f30956f52fdb892f53e54fcd528b1f104/resolv.conf as [nameserver 192.168.105.1]"
	Jul 08 19:43:33 ha-881000 cri-dockerd[1188]: time="2024-07-08T19:43:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1752461159c8042a58a99b7a8c68cb8af89b132f8723cfc3e44f8b585e3368ee/resolv.conf as [nameserver 192.168.105.1]"
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.607143638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.607185509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.607195707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.607247775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 cri-dockerd[1188]: time="2024-07-08T19:43:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e337c3f92f0c7d3deeffa13eed57058733fc29cdb578a1672ee9062838d1100c/resolv.conf as [nameserver 192.168.105.1]"
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.650182966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.650209646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.650214848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.650343543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.653606857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.654756947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.654763315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.654795031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	57f745d9e2f1c       2437cf7621777                                                                              3 seconds ago       Running             coredns                   0                   e337c3f92f0c7       coredns-7db6d8ff4d-rlj9v
	e5decdf53e42b       2437cf7621777                                                                              3 seconds ago       Running             coredns                   0                   1752461159c80       coredns-7db6d8ff4d-2646x
	0ae23ac6a6991       ba04bb24b9575                                                                              3 seconds ago       Running             storage-provisioner       0                   e5df0a87fa905       storage-provisioner
	8c20b27d40191       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8   6 seconds ago       Running             kindnet-cni               0                   52b9dd42202b7       kindnet-mmchf
	e3b0434a308bd       66dbb96a9149f                                                                              8 seconds ago       Running             kube-proxy                0                   f031f136a08f5       kube-proxy-nqzkk
	ed9f0e91126a2       c7dd04b1bafeb                                                                              26 seconds ago      Running             kube-scheduler            0                   e9a1e4f9ec7d4       kube-scheduler-ha-881000
	5c4705f221f30       014faa467e297                                                                              26 seconds ago      Running             etcd                      0                   59d4e027b0867       etcd-ha-881000
	db173c1aa7e67       84c601f3f72c8                                                                              26 seconds ago      Running             kube-apiserver            0                   3994029f9ba47       kube-apiserver-ha-881000
	cc323cbcdc6df       e1dcc3400d3ea                                                                              26 seconds ago      Running             kube-controller-manager   0                   109f63f7b1864       kube-controller-manager-ha-881000
	
	
	==> coredns [57f745d9e2f1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	
	
	==> coredns [e5decdf53e42] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               ha-881000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-881000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-881000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T12_43_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:43:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-881000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 19:43:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:43:32 +0000   Mon, 08 Jul 2024 19:43:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:43:32 +0000   Mon, 08 Jul 2024 19:43:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:43:32 +0000   Mon, 08 Jul 2024 19:43:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:43:32 +0000   Mon, 08 Jul 2024 19:43:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    ha-881000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147456Ki
	  pods:               110
	System Info:
	  Machine ID:                 93738340db184b2d89e381b6c5d2ace0
	  System UUID:                93738340db184b2d89e381b6c5d2ace0
	  Boot ID:                    b2c247d6-8c31-44f4-8eed-8a0c638151a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2646x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8s
	  kube-system                 coredns-7db6d8ff4d-rlj9v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8s
	  kube-system                 etcd-ha-881000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22s
	  kube-system                 kindnet-mmchf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8s
	  kube-system                 kube-apiserver-ha-881000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-controller-manager-ha-881000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-proxy-nqzkk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-scheduler-ha-881000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 8s    kube-proxy       
	  Normal  Starting                 22s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22s   kubelet          Node ha-881000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s   kubelet          Node ha-881000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s   kubelet          Node ha-881000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9s    node-controller  Node ha-881000 event: Registered Node ha-881000 in Controller
	  Normal  NodeReady                4s    kubelet          Node ha-881000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.649311] EINJ: EINJ table not found.
	[  +0.550341] systemd-fstab-generator[117]: Ignoring "noauto" option for root device
	[  +0.116810] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000366] platform regulatory.0: Falling back to sysfs fallback for: regulatory.db
	[  +8.126931] systemd-fstab-generator[481]: Ignoring "noauto" option for root device
	[  +0.066385] systemd-fstab-generator[493]: Ignoring "noauto" option for root device
	[  +1.138640] kauditd_printk_skb: 37 callbacks suppressed
	[  +0.387861] systemd-fstab-generator[860]: Ignoring "noauto" option for root device
	[  +0.164215] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.072926] systemd-fstab-generator[911]: Ignoring "noauto" option for root device
	[  +0.092840] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[Jul 8 19:43] systemd-fstab-generator[1141]: Ignoring "noauto" option for root device
	[  +0.063674] systemd-fstab-generator[1153]: Ignoring "noauto" option for root device
	[  +0.064289] systemd-fstab-generator[1165]: Ignoring "noauto" option for root device
	[  +0.093799] systemd-fstab-generator[1180]: Ignoring "noauto" option for root device
	[  +2.529288] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.035577] kauditd_printk_skb: 241 callbacks suppressed
	[  +2.302249] systemd-fstab-generator[1527]: Ignoring "noauto" option for root device
	[  +2.376920] systemd-fstab-generator[1697]: Ignoring "noauto" option for root device
	[  +0.726669] kauditd_printk_skb: 104 callbacks suppressed
	[  +3.283868] systemd-fstab-generator[2108]: Ignoring "noauto" option for root device
	[ +14.467631] kauditd_printk_skb: 52 callbacks suppressed
	[  +0.265302] systemd-fstab-generator[2525]: Ignoring "noauto" option for root device
	[  +4.845835] kauditd_printk_skb: 60 callbacks suppressed
	
	
	==> etcd [5c4705f221f3] <==
	{"level":"info","ts":"2024-07-08T19:43:10.924073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2024-07-08T19:43:10.924134Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2024-07-08T19:43:10.924281Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-08T19:43:10.924389Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T19:43:10.924415Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T19:43:10.924501Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-07-08T19:43:10.924525Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-07-08T19:43:11.306068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-08T19:43:11.306106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-08T19:43:11.30612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2024-07-08T19:43:11.306129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.30621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.306222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.306227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.314087Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.319356Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:ha-881000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T19:43:11.321333Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.321365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.321373Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.321377Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:43:11.321518Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:43:11.325963Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T19:43:11.326646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2024-07-08T19:43:11.342065Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T19:43:11.342076Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:43:36 up 0 min,  0 users,  load average: 0.75, 0.20, 0.07
	Linux ha-881000 5.10.207 #1 SMP PREEMPT Wed Jul 3 15:00:24 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8c20b27d4019] <==
	I0708 19:43:31.094017       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0708 19:43:31.094078       1 main.go:107] hostIP = 192.168.105.5
	podIP = 192.168.105.5
	I0708 19:43:31.094157       1 main.go:116] setting mtu 1500 for CNI 
	I0708 19:43:31.094166       1 main.go:146] kindnetd IP family: "ipv4"
	I0708 19:43:31.094171       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0708 19:43:31.198442       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:43:31.198484       1 main.go:227] handling current node
	
	
	==> kube-apiserver [db173c1aa7e6] <==
	I0708 19:43:12.108376       1 aggregator.go:165] initial CRD sync complete...
	I0708 19:43:12.108386       1 autoregister_controller.go:141] Starting autoregister controller
	I0708 19:43:12.108398       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0708 19:43:12.108405       1 cache.go:39] Caches are synced for autoregister controller
	I0708 19:43:12.119446       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0708 19:43:12.122598       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0708 19:43:12.122605       1 policy_source.go:224] refreshing policies
	E0708 19:43:12.152783       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0708 19:43:12.201738       1 controller.go:615] quota admission added evaluator for: namespaces
	I0708 19:43:12.308168       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 19:43:13.002697       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0708 19:43:13.004733       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0708 19:43:13.004742       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0708 19:43:13.142931       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 19:43:13.154540       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0708 19:43:13.204634       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0708 19:43:13.206693       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0708 19:43:13.207032       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 19:43:13.208791       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 19:43:14.050517       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0708 19:43:14.293886       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 19:43:14.297753       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0708 19:43:14.301478       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 19:43:28.052829       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0708 19:43:28.108360       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [cc323cbcdc6d] <==
	I0708 19:43:27.364607       1 shared_informer.go:320] Caches are synced for job
	I0708 19:43:27.379516       1 shared_informer.go:320] Caches are synced for taint
	I0708 19:43:27.379569       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0708 19:43:27.379665       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-881000"
	I0708 19:43:27.379876       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0708 19:43:27.400488       1 shared_informer.go:320] Caches are synced for cronjob
	I0708 19:43:27.402642       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0708 19:43:27.449755       1 shared_informer.go:320] Caches are synced for disruption
	I0708 19:43:27.456110       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:43:27.502148       1 shared_informer.go:320] Caches are synced for attach detach
	I0708 19:43:27.506149       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:43:27.911596       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:43:27.957884       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:43:27.957934       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0708 19:43:28.425227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="314.836166ms"
	I0708 19:43:28.435658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="10.396584ms"
	I0708 19:43:28.435835       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="149.208µs"
	I0708 19:43:32.844754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.079µs"
	I0708 19:43:32.851504       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.888µs"
	I0708 19:43:32.855122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.561µs"
	I0708 19:43:34.205110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.198µs"
	I0708 19:43:34.217813       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="4.734129ms"
	I0708 19:43:34.217858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.281µs"
	I0708 19:43:34.230679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.799989ms"
	I0708 19:43:34.230874       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.029µs"
	
	
	==> kube-proxy [e3b0434a308b] <==
	I0708 19:43:28.503731       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:43:28.508302       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.5"]
	I0708 19:43:28.516101       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:43:28.516115       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:43:28.516122       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:43:28.516705       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:43:28.516832       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:43:28.516838       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:43:28.517447       1 config.go:192] "Starting service config controller"
	I0708 19:43:28.517466       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:43:28.517525       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:43:28.517530       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:43:28.517796       1 config.go:319] "Starting node config controller"
	I0708 19:43:28.518198       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:43:28.618095       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 19:43:28.618123       1 shared_informer.go:320] Caches are synced for service config
	I0708 19:43:28.618242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ed9f0e91126a] <==
	W0708 19:43:12.067367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 19:43:12.068934       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 19:43:12.068365       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 19:43:12.068955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 19:43:12.068385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.068976       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.068397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 19:43:12.069003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0708 19:43:12.068425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.069013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.068441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 19:43:12.069033       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 19:43:12.068458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 19:43:12.069050       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 19:43:12.068468       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 19:43:12.069087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 19:43:12.068628       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 19:43:12.069141       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 19:43:12.068640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.069171       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.978094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.978251       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.987481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.987495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0708 19:43:13.665698       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 08 19:43:27 ha-881000 kubelet[2114]: I0708 19:43:27.273244    2114 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.061446    2114 topology_manager.go:215] "Topology Admit Handler" podUID="0037978f-9b19-49c2-a0fd-a7757effb5e9" podNamespace="kube-system" podName="kube-proxy-nqzkk"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.062525    2114 topology_manager.go:215] "Topology Admit Handler" podUID="2f8fecb7-8906-46c9-9d55-c56254b8b3d7" podNamespace="kube-system" podName="kindnet-mmchf"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214339    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0037978f-9b19-49c2-a0fd-a7757effb5e9-lib-modules\") pod \"kube-proxy-nqzkk\" (UID: \"0037978f-9b19-49c2-a0fd-a7757effb5e9\") " pod="kube-system/kube-proxy-nqzkk"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214363    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgw6t\" (UniqueName: \"kubernetes.io/projected/0037978f-9b19-49c2-a0fd-a7757effb5e9-kube-api-access-zgw6t\") pod \"kube-proxy-nqzkk\" (UID: \"0037978f-9b19-49c2-a0fd-a7757effb5e9\") " pod="kube-system/kube-proxy-nqzkk"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214374    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0037978f-9b19-49c2-a0fd-a7757effb5e9-kube-proxy\") pod \"kube-proxy-nqzkk\" (UID: \"0037978f-9b19-49c2-a0fd-a7757effb5e9\") " pod="kube-system/kube-proxy-nqzkk"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214385    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0037978f-9b19-49c2-a0fd-a7757effb5e9-xtables-lock\") pod \"kube-proxy-nqzkk\" (UID: \"0037978f-9b19-49c2-a0fd-a7757effb5e9\") " pod="kube-system/kube-proxy-nqzkk"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214392    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f8fecb7-8906-46c9-9d55-c56254b8b3d7-lib-modules\") pod \"kindnet-mmchf\" (UID: \"2f8fecb7-8906-46c9-9d55-c56254b8b3d7\") " pod="kube-system/kindnet-mmchf"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214400    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2f8fecb7-8906-46c9-9d55-c56254b8b3d7-cni-cfg\") pod \"kindnet-mmchf\" (UID: \"2f8fecb7-8906-46c9-9d55-c56254b8b3d7\") " pod="kube-system/kindnet-mmchf"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214407    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f8fecb7-8906-46c9-9d55-c56254b8b3d7-xtables-lock\") pod \"kindnet-mmchf\" (UID: \"2f8fecb7-8906-46c9-9d55-c56254b8b3d7\") " pod="kube-system/kindnet-mmchf"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214414    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvkr9\" (UniqueName: \"kubernetes.io/projected/2f8fecb7-8906-46c9-9d55-c56254b8b3d7-kube-api-access-kvkr9\") pod \"kindnet-mmchf\" (UID: \"2f8fecb7-8906-46c9-9d55-c56254b8b3d7\") " pod="kube-system/kindnet-mmchf"
	Jul 08 19:43:31 ha-881000 kubelet[2114]: I0708 19:43:31.195873    2114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nqzkk" podStartSLOduration=3.195847979 podStartE2EDuration="3.195847979s" podCreationTimestamp="2024-07-08 19:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-08 19:43:29.193994268 +0000 UTC m=+15.129727133" watchObservedRunningTime="2024-07-08 19:43:31.195847979 +0000 UTC m=+17.131580886"
	Jul 08 19:43:32 ha-881000 kubelet[2114]: I0708 19:43:32.833798    2114 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Jul 08 19:43:32 ha-881000 kubelet[2114]: I0708 19:43:32.844121    2114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mmchf" podStartSLOduration=2.390346401 podStartE2EDuration="4.844106498s" podCreationTimestamp="2024-07-08 19:43:28 +0000 UTC" firstStartedPulling="2024-07-08 19:43:28.500754851 +0000 UTC m=+14.436487716" lastFinishedPulling="2024-07-08 19:43:30.954514948 +0000 UTC m=+16.890247813" observedRunningTime="2024-07-08 19:43:31.195951852 +0000 UTC m=+17.131684717" watchObservedRunningTime="2024-07-08 19:43:32.844106498 +0000 UTC m=+18.779839405"
	Jul 08 19:43:32 ha-881000 kubelet[2114]: I0708 19:43:32.844546    2114 topology_manager.go:215] "Topology Admit Handler" podUID="5a1aa968-b181-4318-a7f2-fb0f94617bd5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2646x"
	Jul 08 19:43:32 ha-881000 kubelet[2114]: I0708 19:43:32.844652    2114 topology_manager.go:215] "Topology Admit Handler" podUID="57423cc1-b13f-45c7-b2df-71621270a61f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rlj9v"
	Jul 08 19:43:32 ha-881000 kubelet[2114]: I0708 19:43:32.846475    2114 topology_manager.go:215] "Topology Admit Handler" podUID="62d01d4e-c78c-499e-9905-7ff510f1edea" podNamespace="kube-system" podName="storage-provisioner"
	Jul 08 19:43:33 ha-881000 kubelet[2114]: I0708 19:43:33.044973    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/62d01d4e-c78c-499e-9905-7ff510f1edea-tmp\") pod \"storage-provisioner\" (UID: \"62d01d4e-c78c-499e-9905-7ff510f1edea\") " pod="kube-system/storage-provisioner"
	Jul 08 19:43:33 ha-881000 kubelet[2114]: I0708 19:43:33.045056    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a1aa968-b181-4318-a7f2-fb0f94617bd5-config-volume\") pod \"coredns-7db6d8ff4d-2646x\" (UID: \"5a1aa968-b181-4318-a7f2-fb0f94617bd5\") " pod="kube-system/coredns-7db6d8ff4d-2646x"
	Jul 08 19:43:33 ha-881000 kubelet[2114]: I0708 19:43:33.045068    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzlkk\" (UniqueName: \"kubernetes.io/projected/5a1aa968-b181-4318-a7f2-fb0f94617bd5-kube-api-access-tzlkk\") pod \"coredns-7db6d8ff4d-2646x\" (UID: \"5a1aa968-b181-4318-a7f2-fb0f94617bd5\") " pod="kube-system/coredns-7db6d8ff4d-2646x"
	Jul 08 19:43:33 ha-881000 kubelet[2114]: I0708 19:43:33.045078    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57423cc1-b13f-45c7-b2df-71621270a61f-config-volume\") pod \"coredns-7db6d8ff4d-rlj9v\" (UID: \"57423cc1-b13f-45c7-b2df-71621270a61f\") " pod="kube-system/coredns-7db6d8ff4d-rlj9v"
	Jul 08 19:43:33 ha-881000 kubelet[2114]: I0708 19:43:33.045087    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp5ns\" (UniqueName: \"kubernetes.io/projected/57423cc1-b13f-45c7-b2df-71621270a61f-kube-api-access-qp5ns\") pod \"coredns-7db6d8ff4d-rlj9v\" (UID: \"57423cc1-b13f-45c7-b2df-71621270a61f\") " pod="kube-system/coredns-7db6d8ff4d-rlj9v"
	Jul 08 19:43:33 ha-881000 kubelet[2114]: I0708 19:43:33.045095    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9xs9\" (UniqueName: \"kubernetes.io/projected/62d01d4e-c78c-499e-9905-7ff510f1edea-kube-api-access-c9xs9\") pod \"storage-provisioner\" (UID: \"62d01d4e-c78c-499e-9905-7ff510f1edea\") " pod="kube-system/storage-provisioner"
	Jul 08 19:43:34 ha-881000 kubelet[2114]: I0708 19:43:34.206806    2114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2646x" podStartSLOduration=6.206793657 podStartE2EDuration="6.206793657s" podCreationTimestamp="2024-07-08 19:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-08 19:43:34.206576803 +0000 UTC m=+20.142309710" watchObservedRunningTime="2024-07-08 19:43:34.206793657 +0000 UTC m=+20.142526564"
	Jul 08 19:43:34 ha-881000 kubelet[2114]: I0708 19:43:34.224712    2114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.224699294 podStartE2EDuration="6.224699294s" podCreationTimestamp="2024-07-08 19:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-08 19:43:34.220522604 +0000 UTC m=+20.156255511" watchObservedRunningTime="2024-07-08 19:43:34.224699294 +0000 UTC m=+20.160432201"
	
	
	==> storage-provisioner [0ae23ac6a699] <==
	I0708 19:43:33.659595       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 19:43:33.665926       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 19:43:33.666090       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 19:43:33.672847       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 19:43:33.673094       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-881000_e7528831-25b3-4257-a2ce-dbc5f5c23e47!
	I0708 19:43:33.683818       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bb7994d-1374-425c-b6a5-ded5a8749b0f", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-881000_e7528831-25b3-4257-a2ce-dbc5f5c23e47 became leader
	I0708 19:43:33.773516       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-881000_e7528831-25b3-4257-a2ce-dbc5f5c23e47!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ha-881000 -n ha-881000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-881000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (1.00s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-881000" in json of 'profile list' to have "Degraded" status but have "Running" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-881000\",\"Status\":\"Running\",\"Config\":{\"Name\":\"ha-881000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-881000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersio
n\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"Sock
etVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-881000 -- apply -f             | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- rollout status       | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | deployment/busybox                   |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  --             | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  --             | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  -- nslookup    | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| node    | add -p ha-881000 -v=7                | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-881000 node stop m02 -v=7         | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-881000 node start m02 -v=7        | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-881000 -v=7               | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:41 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-881000 -v=7                    | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:41 PDT | 08 Jul 24 12:42 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-881000 --wait=true -v=7        | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:42 PDT | 08 Jul 24 12:43 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-881000                    | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT |                     |
	| node    | ha-881000 node delete m03 -v=7       | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 12:42:37
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 12:42:37.929795    2792 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:42:37.929956    2792 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:42:37.929961    2792 out.go:304] Setting ErrFile to fd 2...
	I0708 12:42:37.929964    2792 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:42:37.930126    2792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:42:37.931417    2792 out.go:298] Setting JSON to false
	I0708 12:42:37.950421    2792 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2525,"bootTime":1720465232,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:42:37.950488    2792 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:42:37.955594    2792 out.go:177] * [ha-881000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:42:37.961390    2792 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:42:37.961418    2792 notify.go:220] Checking for updates...
	I0708 12:42:37.969375    2792 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:42:37.973398    2792 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:42:37.974740    2792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:42:37.977341    2792 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:42:37.980341    2792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:42:37.983678    2792 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:42:37.983736    2792 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:42:37.988290    2792 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 12:42:37.995370    2792 start.go:297] selected driver: qemu2
	I0708 12:42:37.995378    2792 start.go:901] validating driver "qemu2" against &{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:42:37.995437    2792 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:42:37.997691    2792 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:42:37.997741    2792 cni.go:84] Creating CNI manager for ""
	I0708 12:42:37.997746    2792 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:42:37.997797    2792 start.go:340] cluster config:
	{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:42:38.001327    2792 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:42:38.008296    2792 out.go:177] * Starting "ha-881000" primary control-plane node in "ha-881000" cluster
	I0708 12:42:38.012364    2792 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:42:38.012385    2792 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:42:38.012393    2792 cache.go:56] Caching tarball of preloaded images
	I0708 12:42:38.012464    2792 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:42:38.012471    2792 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:42:38.012532    2792 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json ...
	I0708 12:42:38.012953    2792 start.go:360] acquireMachinesLock for ha-881000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:42:38.012989    2792 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "ha-881000"
	I0708 12:42:38.012997    2792 start.go:96] Skipping create...Using existing machine configuration
	I0708 12:42:38.013004    2792 fix.go:54] fixHost starting: 
	I0708 12:42:38.013127    2792 fix.go:112] recreateIfNeeded on ha-881000: state=Stopped err=<nil>
	W0708 12:42:38.013136    2792 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 12:42:38.020265    2792 out.go:177] * Restarting existing qemu2 VM for "ha-881000" ...
	I0708 12:42:38.024422    2792 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:75:66:b4:8a:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/disk.qcow2
	I0708 12:42:38.064421    2792 main.go:141] libmachine: STDOUT: 
	I0708 12:42:38.064451    2792 main.go:141] libmachine: STDERR: 
	I0708 12:42:38.064456    2792 main.go:141] libmachine: Attempt 0
	I0708 12:42:38.064467    2792 main.go:141] libmachine: Searching for de:75:66:b4:8a:80 in /var/db/dhcpd_leases ...
	I0708 12:42:38.064527    2792 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0708 12:42:38.064545    2792 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668c412b}
	I0708 12:42:38.064549    2792 main.go:141] libmachine: Found match: de:75:66:b4:8a:80
	I0708 12:42:38.064553    2792 main.go:141] libmachine: IP: 192.168.105.5
	I0708 12:42:38.064557    2792 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0708 12:42:57.605102    2792 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json ...
	I0708 12:42:57.605793    2792 machine.go:94] provisionDockerMachine start ...
	I0708 12:42:57.605982    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:57.606471    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:57.606485    2792 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 12:42:57.682410    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 12:42:57.682463    2792 buildroot.go:166] provisioning hostname "ha-881000"
	I0708 12:42:57.682564    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:57.682825    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:57.682837    2792 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-881000 && echo "ha-881000" | sudo tee /etc/hostname
	I0708 12:42:57.754602    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-881000
	
	I0708 12:42:57.754677    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:57.754847    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:57.754860    2792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-881000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-881000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-881000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 12:42:57.814080    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 12:42:57.814095    2792 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19195-1270/.minikube CaCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19195-1270/.minikube}
	I0708 12:42:57.814110    2792 buildroot.go:174] setting up certificates
	I0708 12:42:57.814119    2792 provision.go:84] configureAuth start
	I0708 12:42:57.814126    2792 provision.go:143] copyHostCerts
	I0708 12:42:57.814148    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 12:42:57.814214    2792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem, removing ...
	I0708 12:42:57.814220    2792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 12:42:57.814354    2792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem (1123 bytes)
	I0708 12:42:57.814547    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 12:42:57.814576    2792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem, removing ...
	I0708 12:42:57.814580    2792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 12:42:57.814683    2792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem (1675 bytes)
	I0708 12:42:57.814819    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 12:42:57.814851    2792 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem, removing ...
	I0708 12:42:57.814855    2792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 12:42:57.814933    2792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem (1078 bytes)
	I0708 12:42:57.815103    2792 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem org=jenkins.ha-881000 san=[127.0.0.1 192.168.105.5 ha-881000 localhost minikube]
	I0708 12:42:57.899167    2792 provision.go:177] copyRemoteCerts
	I0708 12:42:57.899194    2792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 12:42:57.899201    2792 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:42:57.927671    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 12:42:57.927712    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 12:42:57.935956    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 12:42:57.936005    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0708 12:42:57.943804    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 12:42:57.943837    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 12:42:57.951970    2792 provision.go:87] duration metric: took 137.847333ms to configureAuth
	I0708 12:42:57.951978    2792 buildroot.go:189] setting minikube options for container-runtime
	I0708 12:42:57.952085    2792 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:42:57.952113    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:57.952210    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:57.952214    2792 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0708 12:42:58.005015    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0708 12:42:58.005022    2792 buildroot.go:70] root file system type: tmpfs
	I0708 12:42:58.005079    2792 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0708 12:42:58.005112    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:58.005198    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:58.005231    2792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0708 12:42:58.062255    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0708 12:42:58.062306    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:58.062412    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:58.062420    2792 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0708 12:42:59.459311    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0708 12:42:59.459323    2792 machine.go:97] duration metric: took 1.853564625s to provisionDockerMachine
	I0708 12:42:59.459331    2792 start.go:293] postStartSetup for "ha-881000" (driver="qemu2")
	I0708 12:42:59.459338    2792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 12:42:59.459407    2792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 12:42:59.459418    2792 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:42:59.490481    2792 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 12:42:59.491811    2792 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 12:42:59.491818    2792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/addons for local assets ...
	I0708 12:42:59.491918    2792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/files for local assets ...
	I0708 12:42:59.492051    2792 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> 17672.pem in /etc/ssl/certs
	I0708 12:42:59.492056    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> /etc/ssl/certs/17672.pem
	I0708 12:42:59.492184    2792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 12:42:59.495802    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /etc/ssl/certs/17672.pem (1708 bytes)
	I0708 12:42:59.504060    2792 start.go:296] duration metric: took 44.72475ms for postStartSetup
	I0708 12:42:59.504075    2792 fix.go:56] duration metric: took 21.491585916s for fixHost
	I0708 12:42:59.504112    2792 main.go:141] libmachine: Using SSH client type: native
	I0708 12:42:59.504221    2792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c66920] 0x102c69180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:42:59.504226    2792 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 12:42:59.555643    2792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720467779.660039379
	
	I0708 12:42:59.555651    2792 fix.go:216] guest clock: 1720467779.660039379
	I0708 12:42:59.555655    2792 fix.go:229] Guest: 2024-07-08 12:42:59.660039379 -0700 PDT Remote: 2024-07-08 12:42:59.504077 -0700 PDT m=+21.609210709 (delta=155.962379ms)
	I0708 12:42:59.555675    2792 fix.go:200] guest clock delta is within tolerance: 155.962379ms
	I0708 12:42:59.555677    2792 start.go:83] releasing machines lock for "ha-881000", held for 21.543198875s
	I0708 12:42:59.555983    2792 ssh_runner.go:195] Run: cat /version.json
	I0708 12:42:59.555998    2792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 12:42:59.555997    2792 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:42:59.556014    2792 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:42:59.630004    2792 ssh_runner.go:195] Run: systemctl --version
	I0708 12:42:59.632528    2792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 12:42:59.634713    2792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 12:42:59.634742    2792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 12:42:59.641382    2792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 12:42:59.641391    2792 start.go:494] detecting cgroup driver to use...
	I0708 12:42:59.641465    2792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:42:59.648306    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0708 12:42:59.652336    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0708 12:42:59.656198    2792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0708 12:42:59.656228    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0708 12:42:59.660186    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:42:59.664020    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0708 12:42:59.668023    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:42:59.672109    2792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 12:42:59.675874    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0708 12:42:59.679612    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0708 12:42:59.683413    2792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0708 12:42:59.687103    2792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 12:42:59.690354    2792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 12:42:59.693546    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:42:59.793928    2792 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0708 12:42:59.801903    2792 start.go:494] detecting cgroup driver to use...
	I0708 12:42:59.801983    2792 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0708 12:42:59.808101    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:42:59.813711    2792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 12:42:59.820068    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:42:59.825566    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:42:59.830939    2792 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0708 12:42:59.863864    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:42:59.869916    2792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:42:59.876286    2792 ssh_runner.go:195] Run: which cri-dockerd
	I0708 12:42:59.877768    2792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0708 12:42:59.880958    2792 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0708 12:42:59.886783    2792 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0708 12:42:59.960067    2792 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0708 12:43:00.028561    2792 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0708 12:43:00.028631    2792 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0708 12:43:00.034849    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:43:00.122720    2792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 12:43:02.305708    2792 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.183023708s)
	I0708 12:43:02.305781    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0708 12:43:02.311179    2792 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0708 12:43:02.317687    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 12:43:02.322820    2792 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0708 12:43:02.401504    2792 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0708 12:43:02.464769    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:43:02.528874    2792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0708 12:43:02.535741    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 12:43:02.541590    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:43:02.625585    2792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0708 12:43:02.650743    2792 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0708 12:43:02.650828    2792 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0708 12:43:02.654105    2792 start.go:562] Will wait 60s for crictl version
	I0708 12:43:02.654151    2792 ssh_runner.go:195] Run: which crictl
	I0708 12:43:02.655436    2792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 12:43:02.675462    2792 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0708 12:43:02.675525    2792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 12:43:02.685440    2792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 12:43:02.699878    2792 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0708 12:43:02.700008    2792 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0708 12:43:02.701732    2792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 12:43:02.705787    2792 kubeadm.go:877] updating cluster {Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 12:43:02.705834    2792 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:43:02.705879    2792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 12:43:02.710507    2792 docker.go:685] Got preloaded images: 
	I0708 12:43:02.710516    2792 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0708 12:43:02.710553    2792 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0708 12:43:02.713839    2792 ssh_runner.go:195] Run: which lz4
	I0708 12:43:02.715094    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0708 12:43:02.715184    2792 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0708 12:43:02.716549    2792 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 12:43:02.716564    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (335401736 bytes)
	I0708 12:43:04.005323    2792 docker.go:649] duration metric: took 1.290201209s to copy over tarball
	I0708 12:43:04.005379    2792 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 12:43:05.060774    2792 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.055402791s)
	I0708 12:43:05.060797    2792 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 12:43:05.075952    2792 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0708 12:43:05.079853    2792 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0708 12:43:05.085627    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:43:05.155275    2792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 12:43:07.363151    2792 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.207908791s)
	I0708 12:43:07.363264    2792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 12:43:07.369552    2792 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0708 12:43:07.369562    2792 cache_images.go:84] Images are preloaded, skipping loading
	I0708 12:43:07.369567    2792 kubeadm.go:928] updating node { 192.168.105.5 8443 v1.30.2 docker true true} ...
	I0708 12:43:07.369641    2792 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-881000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 12:43:07.369705    2792 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0708 12:43:07.378105    2792 cni.go:84] Creating CNI manager for ""
	I0708 12:43:07.378113    2792 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:43:07.378118    2792 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 12:43:07.378130    2792 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-881000 NodeName:ha-881000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 12:43:07.378203    2792 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-881000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 12:43:07.378254    2792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 12:43:07.381873    2792 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 12:43:07.381909    2792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 12:43:07.385107    2792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0708 12:43:07.390946    2792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 12:43:07.396623    2792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0708 12:43:07.402727    2792 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0708 12:43:07.403984    2792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 12:43:07.408235    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:43:07.489744    2792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 12:43:07.497794    2792 certs.go:68] Setting up /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000 for IP: 192.168.105.5
	I0708 12:43:07.497805    2792 certs.go:194] generating shared ca certs ...
	I0708 12:43:07.497814    2792 certs.go:226] acquiring lock for ca certs: {Name:mka13b605a6983b2618b91f3a0bdec43c132a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.497997    2792 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key
	I0708 12:43:07.498047    2792 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key
	I0708 12:43:07.498057    2792 certs.go:256] generating profile certs ...
	I0708 12:43:07.498089    2792 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key
	I0708 12:43:07.498097    2792 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt with IP's: []
	I0708 12:43:07.610199    2792 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt ...
	I0708 12:43:07.610210    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt: {Name:mk17d6ffdb6e4f5c9c3a6134a2ecb0fbf924f72e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.610490    2792 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key ...
	I0708 12:43:07.610493    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key: {Name:mkb0b24e1d4b3fead9c039f8e3325a790cd2b327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.610624    2792 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.174b6ad8
	I0708 12:43:07.610632    2792 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt.174b6ad8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5]
	I0708 12:43:07.817295    2792 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt.174b6ad8 ...
	I0708 12:43:07.817301    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt.174b6ad8: {Name:mkcff40587e3bcbf1550d8c6105c1ac2a7f41481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.817491    2792 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.174b6ad8 ...
	I0708 12:43:07.817496    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.174b6ad8: {Name:mkc5dd491403231f22bb82af593a8317b9d81626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.817620    2792 certs.go:381] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt.174b6ad8 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt
	I0708 12:43:07.817911    2792 certs.go:385] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.174b6ad8 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key
	I0708 12:43:07.818078    2792 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key
	I0708 12:43:07.818089    2792 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt with IP's: []
	I0708 12:43:07.864462    2792 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt ...
	I0708 12:43:07.864466    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt: {Name:mkc7960df69214b7fc896c3d856e9afae85b0de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.864636    2792 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key ...
	I0708 12:43:07.864640    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key: {Name:mk51d5b20112d4dd24f6f8c5413a022430f0f839 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:07.864777    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 12:43:07.864792    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 12:43:07.864803    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 12:43:07.864817    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 12:43:07.864828    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 12:43:07.864843    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 12:43:07.864853    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 12:43:07.864865    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 12:43:07.864954    2792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem (1338 bytes)
	W0708 12:43:07.864992    2792 certs.go:480] ignoring /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767_empty.pem, impossibly tiny 0 bytes
	I0708 12:43:07.864999    2792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 12:43:07.865025    2792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem (1078 bytes)
	I0708 12:43:07.865048    2792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem (1123 bytes)
	I0708 12:43:07.865070    2792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem (1675 bytes)
	I0708 12:43:07.865119    2792 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem (1708 bytes)
	I0708 12:43:07.865148    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem -> /usr/share/ca-certificates/1767.pem
	I0708 12:43:07.865162    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> /usr/share/ca-certificates/17672.pem
	I0708 12:43:07.865173    2792 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:43:07.865490    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 12:43:07.875044    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 12:43:07.883695    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 12:43:07.892161    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 12:43:07.900426    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0708 12:43:07.908600    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 12:43:07.916656    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 12:43:07.924696    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 12:43:07.932876    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem --> /usr/share/ca-certificates/1767.pem (1338 bytes)
	I0708 12:43:07.940800    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /usr/share/ca-certificates/17672.pem (1708 bytes)
	I0708 12:43:07.948776    2792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 12:43:07.956788    2792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 12:43:07.962647    2792 ssh_runner.go:195] Run: openssl version
	I0708 12:43:07.964895    2792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1767.pem && ln -fs /usr/share/ca-certificates/1767.pem /etc/ssl/certs/1767.pem"
	I0708 12:43:07.968700    2792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1767.pem
	I0708 12:43:07.970328    2792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:34 /usr/share/ca-certificates/1767.pem
	I0708 12:43:07.970349    2792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1767.pem
	I0708 12:43:07.972403    2792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1767.pem /etc/ssl/certs/51391683.0"
	I0708 12:43:07.976356    2792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17672.pem && ln -fs /usr/share/ca-certificates/17672.pem /etc/ssl/certs/17672.pem"
	I0708 12:43:07.980289    2792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17672.pem
	I0708 12:43:07.981950    2792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:34 /usr/share/ca-certificates/17672.pem
	I0708 12:43:07.981968    2792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17672.pem
	I0708 12:43:07.983941    2792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17672.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 12:43:07.987886    2792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 12:43:07.992006    2792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:43:07.993803    2792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:43:07.993828    2792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:43:07.995840    2792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 12:43:07.999786    2792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 12:43:08.001290    2792 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 12:43:08.001328    2792 kubeadm.go:391] StartCluster: {Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:43:08.001392    2792 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0708 12:43:08.006839    2792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 12:43:08.010550    2792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 12:43:08.013771    2792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 12:43:08.017020    2792 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 12:43:08.017026    2792 kubeadm.go:156] found existing configuration files:
	
	I0708 12:43:08.017048    2792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 12:43:08.020353    2792 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 12:43:08.020382    2792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 12:43:08.023849    2792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 12:43:08.027218    2792 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 12:43:08.027242    2792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 12:43:08.030651    2792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 12:43:08.033636    2792 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 12:43:08.033665    2792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 12:43:08.036722    2792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 12:43:08.040035    2792 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 12:43:08.040060    2792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 12:43:08.043777    2792 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 12:43:08.066791    2792 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 12:43:08.066820    2792 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 12:43:08.111846    2792 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 12:43:08.111910    2792 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 12:43:08.111956    2792 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 12:43:08.189910    2792 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 12:43:08.202045    2792 out.go:204]   - Generating certificates and keys ...
	I0708 12:43:08.202077    2792 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 12:43:08.202103    2792 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 12:43:08.262225    2792 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0708 12:43:08.397239    2792 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0708 12:43:08.443677    2792 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0708 12:43:08.503721    2792 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0708 12:43:08.620608    2792 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0708 12:43:08.620674    2792 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-881000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0708 12:43:08.736569    2792 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0708 12:43:08.736635    2792 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-881000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0708 12:43:08.840660    2792 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0708 12:43:08.981914    2792 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0708 12:43:09.139750    2792 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0708 12:43:09.139786    2792 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 12:43:09.233687    2792 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 12:43:09.360724    2792 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 12:43:09.415015    2792 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 12:43:09.657645    2792 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 12:43:09.770358    2792 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 12:43:09.770630    2792 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 12:43:09.771853    2792 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 12:43:09.779159    2792 out.go:204]   - Booting up control plane ...
	I0708 12:43:09.779210    2792 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 12:43:09.779243    2792 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 12:43:09.779274    2792 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 12:43:09.780102    2792 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 12:43:09.780146    2792 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 12:43:09.780178    2792 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 12:43:09.880199    2792 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 12:43:09.880240    2792 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 12:43:10.383195    2792 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 507.67325ms
	I0708 12:43:10.383437    2792 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 12:43:13.384095    2792 kubeadm.go:309] [api-check] The API server is healthy after 3.001497085s
	I0708 12:43:13.390128    2792 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 12:43:13.394120    2792 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 12:43:13.400758    2792 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 12:43:13.400870    2792 kubeadm.go:309] [mark-control-plane] Marking the node ha-881000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 12:43:13.403884    2792 kubeadm.go:309] [bootstrap-token] Using token: djpe70.b1gw9fb9jlqt64nh
	I0708 12:43:13.412978    2792 out.go:204]   - Configuring RBAC rules ...
	I0708 12:43:13.413033    2792 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 12:43:13.413074    2792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 12:43:13.414844    2792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 12:43:13.415731    2792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 12:43:13.416675    2792 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 12:43:13.417635    2792 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 12:43:13.787461    2792 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 12:43:14.193410    2792 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 12:43:14.788258    2792 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 12:43:14.788628    2792 kubeadm.go:309] 
	I0708 12:43:14.788657    2792 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 12:43:14.788668    2792 kubeadm.go:309] 
	I0708 12:43:14.788705    2792 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 12:43:14.788710    2792 kubeadm.go:309] 
	I0708 12:43:14.788722    2792 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 12:43:14.788764    2792 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 12:43:14.788799    2792 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 12:43:14.788802    2792 kubeadm.go:309] 
	I0708 12:43:14.788829    2792 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 12:43:14.788834    2792 kubeadm.go:309] 
	I0708 12:43:14.788864    2792 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 12:43:14.788867    2792 kubeadm.go:309] 
	I0708 12:43:14.788901    2792 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 12:43:14.788940    2792 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 12:43:14.788973    2792 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 12:43:14.788976    2792 kubeadm.go:309] 
	I0708 12:43:14.789023    2792 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 12:43:14.789072    2792 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 12:43:14.789076    2792 kubeadm.go:309] 
	I0708 12:43:14.789116    2792 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token djpe70.b1gw9fb9jlqt64nh \
	I0708 12:43:14.789181    2792 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e \
	I0708 12:43:14.789193    2792 kubeadm.go:309] 	--control-plane 
	I0708 12:43:14.789199    2792 kubeadm.go:309] 
	I0708 12:43:14.789235    2792 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 12:43:14.789238    2792 kubeadm.go:309] 
	I0708 12:43:14.789284    2792 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token djpe70.b1gw9fb9jlqt64nh \
	I0708 12:43:14.789340    2792 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e 
	I0708 12:43:14.789409    2792 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 12:43:14.789417    2792 cni.go:84] Creating CNI manager for ""
	I0708 12:43:14.789421    2792 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:43:14.792826    2792 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0708 12:43:14.799892    2792 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0708 12:43:14.801654    2792 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0708 12:43:14.801660    2792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0708 12:43:14.807319    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0708 12:43:14.942158    2792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 12:43:14.942221    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:14.942234    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-881000 minikube.k8s.io/updated_at=2024_07_08T12_43_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=ha-881000 minikube.k8s.io/primary=true
	I0708 12:43:14.992890    2792 ops.go:34] apiserver oom_adj: -16
	I0708 12:43:14.993004    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:15.495039    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:15.995135    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:16.495063    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:16.995051    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:17.495017    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:17.995012    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:18.495060    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:18.993265    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:19.494955    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:19.994930    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:20.495006    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:20.994990    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:21.494952    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:21.994928    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:22.494878    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:22.994904    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:23.494906    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:23.994925    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:24.494949    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:24.994856    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:25.494928    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:25.994867    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:26.494888    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:26.994661    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:27.494800    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:27.994750    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:28.494811    2792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:43:28.528657    2792 kubeadm.go:1107] duration metric: took 13.586806166s to wait for elevateKubeSystemPrivileges
	W0708 12:43:28.528681    2792 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 12:43:28.528685    2792 kubeadm.go:393] duration metric: took 20.527849042s to StartCluster
	I0708 12:43:28.528695    2792 settings.go:142] acquiring lock: {Name:mka0c397a57d617e1d77508d22cc3adb2edf5927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:28.528799    2792 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:43:28.529135    2792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:43:28.529561    2792 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:43:28.529588    2792 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 12:43:28.529631    2792 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:43:28.529634    2792 addons.go:69] Setting storage-provisioner=true in profile "ha-881000"
	I0708 12:43:28.529648    2792 addons.go:234] Setting addon storage-provisioner=true in "ha-881000"
	I0708 12:43:28.529652    2792 addons.go:69] Setting default-storageclass=true in profile "ha-881000"
	I0708 12:43:28.529662    2792 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:43:28.529666    2792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-881000"
	I0708 12:43:28.530389    2792 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:43:28.530507    2792 kapi.go:59] client config for ha-881000: &rest.Config{Host:"https://192.168.105.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103ff74f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 12:43:28.530752    2792 cert_rotation.go:137] Starting client certificate rotation controller
	I0708 12:43:28.530793    2792 addons.go:234] Setting addon default-storageclass=true in "ha-881000"
	I0708 12:43:28.530802    2792 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:43:28.532502    2792 out.go:177] * Verifying Kubernetes components...
	I0708 12:43:28.532852    2792 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 12:43:28.535674    2792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 12:43:28.535682    2792 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:43:28.539374    2792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 12:43:28.543430    2792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:43:28.546379    2792 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 12:43:28.546387    2792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 12:43:28.546396    2792 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:43:28.629561    2792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 12:43:28.639428    2792 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:43:28.639559    2792 kapi.go:59] client config for ha-881000: &rest.Config{Host:"https://192.168.105.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103ff74f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 12:43:28.639687    2792 node_ready.go:35] waiting up to 6m0s for node "ha-881000" to be "Ready" ...
	I0708 12:43:28.639738    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:28.639742    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:28.639746    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:28.639748    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:28.644083    2792 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0708 12:43:28.646478    2792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 12:43:28.651071    2792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 12:43:28.707960    2792 round_trippers.go:463] GET https://192.168.105.5:8443/apis/storage.k8s.io/v1/storageclasses
	I0708 12:43:28.707968    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:28.707972    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:28.707975    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:28.709000    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:28.709236    2792 round_trippers.go:463] PUT https://192.168.105.5:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0708 12:43:28.709242    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:28.709246    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:28.709248    2792 round_trippers.go:473]     Content-Type: application/json
	I0708 12:43:28.709250    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:28.710374    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:28.813079    2792 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0708 12:43:28.820264    2792 addons.go:510] duration metric: took 290.685ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0708 12:43:29.141506    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:29.141520    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:29.141524    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:29.141526    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:29.142697    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:29.641757    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:29.641768    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:29.641772    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:29.641777    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:29.643254    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:30.141793    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:30.141804    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:30.141816    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:30.141819    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:30.143102    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:30.641779    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:30.641790    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:30.641794    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:30.641796    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:30.643091    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:30.643289    2792 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:43:31.141806    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:31.141820    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:31.141824    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:31.141839    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:31.143113    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:31.641768    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:31.641790    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:31.641795    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:31.641798    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:31.643316    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:32.141787    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:32.141802    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:32.141806    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:32.141808    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:32.143252    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:32.641521    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:32.641533    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:32.641537    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:32.641539    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:32.642587    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:33.140521    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:33.140537    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:33.140541    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:33.140544    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:33.141933    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:33.142232    2792 node_ready.go:49] node "ha-881000" has status "Ready":"True"
	I0708 12:43:33.142246    2792 node_ready.go:38] duration metric: took 4.502647542s for node "ha-881000" to be "Ready" ...
	I0708 12:43:33.142251    2792 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 12:43:33.142279    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:43:33.142284    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:33.142287    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:33.142290    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:33.144365    2792 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0708 12:43:33.147139    2792 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:33.147171    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:43:33.147174    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:33.147178    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:33.147181    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:33.148098    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:33.148399    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:33.148403    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:33.148407    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:33.148409    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:33.149250    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:33.649364    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:43:33.649377    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:33.649382    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:33.649385    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:33.651362    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:33.651660    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:33.651664    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:33.651667    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:33.651670    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:33.652479    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.149215    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:43:34.149227    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.149232    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.149234    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.150649    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:34.151073    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.151081    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.151084    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.151087    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.151918    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.152362    2792 pod_ready.go:92] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:34.152372    2792 pod_ready.go:81] duration metric: took 1.005250625s for pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.152378    2792 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.152400    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rlj9v
	I0708 12:43:34.152403    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.152407    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.152410    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.153045    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.153564    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.153567    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.153571    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.153573    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.154272    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.154504    2792 pod_ready.go:92] pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:34.154509    2792 pod_ready.go:81] duration metric: took 2.127541ms for pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.154513    2792 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.154530    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-881000
	I0708 12:43:34.154532    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.154536    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.154539    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.155229    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.155503    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.155507    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.155510    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.155512    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.156131    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.156581    2792 pod_ready.go:92] pod "etcd-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:34.156587    2792 pod_ready.go:81] duration metric: took 2.070708ms for pod "etcd-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.156591    2792 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.156605    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-881000
	I0708 12:43:34.156608    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.156611    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.156614    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.157250    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.157511    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.157515    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.157518    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.157529    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.158271    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.158530    2792 pod_ready.go:92] pod "kube-apiserver-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:34.158535    2792 pod_ready.go:81] duration metric: took 1.941791ms for pod "kube-apiserver-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.158539    2792 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.158552    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-881000
	I0708 12:43:34.158554    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.158557    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.158559    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.159233    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.342494    2792 request.go:629] Waited for 182.939083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.342537    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.342539    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.342544    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.342548    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.343504    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:34.343715    2792 pod_ready.go:92] pod "kube-controller-manager-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:34.343721    2792 pod_ready.go:81] duration metric: took 185.184ms for pod "kube-controller-manager-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.343725    2792 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nqzkk" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.542488    2792 request.go:629] Waited for 198.731875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqzkk
	I0708 12:43:34.542545    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqzkk
	I0708 12:43:34.542548    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.542555    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.542557    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.543651    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:34.742459    2792 request.go:629] Waited for 198.213125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.742490    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:34.742494    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.742498    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.742501    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.743883    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:34.744203    2792 pod_ready.go:92] pod "kube-proxy-nqzkk" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:34.744211    2792 pod_ready.go:81] duration metric: took 400.49075ms for pod "kube-proxy-nqzkk" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.744216    2792 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:34.942451    2792 request.go:629] Waited for 198.2175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-881000
	I0708 12:43:34.942478    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-881000
	I0708 12:43:34.942483    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:34.942488    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:34.942492    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:34.943819    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:35.142463    2792 request.go:629] Waited for 198.40125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:35.142502    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:43:35.142505    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:35.142509    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:35.142511    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:35.143784    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:35.144105    2792 pod_ready.go:92] pod "kube-scheduler-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:43:35.144112    2792 pod_ready.go:81] duration metric: took 399.902209ms for pod "kube-scheduler-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:43:35.144116    2792 pod_ready.go:38] duration metric: took 2.001906709s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 12:43:35.144127    2792 api_server.go:52] waiting for apiserver process to appear ...
	I0708 12:43:35.144187    2792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:43:35.149888    2792 api_server.go:72] duration metric: took 6.620473375s to wait for apiserver process to appear ...
	I0708 12:43:35.149899    2792 api_server.go:88] waiting for apiserver healthz status ...
	I0708 12:43:35.149907    2792 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:43:35.152573    2792 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0708 12:43:35.152605    2792 round_trippers.go:463] GET https://192.168.105.5:8443/version
	I0708 12:43:35.152610    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:35.152614    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:35.152616    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:35.153274    2792 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:43:35.153316    2792 api_server.go:141] control plane version: v1.30.2
	I0708 12:43:35.153323    2792 api_server.go:131] duration metric: took 3.420834ms to wait for apiserver health ...
	I0708 12:43:35.153326    2792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 12:43:35.342448    2792 request.go:629] Waited for 189.1ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:43:35.342467    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:43:35.342471    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:35.342475    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:35.342477    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:35.344183    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:35.346101    2792 system_pods.go:59] 9 kube-system pods found
	I0708 12:43:35.346115    2792 system_pods.go:61] "coredns-7db6d8ff4d-2646x" [5a1aa968-b181-4318-a7f2-fb0f94617bd5] Running
	I0708 12:43:35.346120    2792 system_pods.go:61] "coredns-7db6d8ff4d-rlj9v" [57423cc1-b13f-45c7-b2df-71621270a61f] Running
	I0708 12:43:35.346122    2792 system_pods.go:61] "etcd-ha-881000" [b905dbae-009a-44f3-87e4-756dfae87ce6] Running
	I0708 12:43:35.346125    2792 system_pods.go:61] "kindnet-mmchf" [2f8fecb7-8906-46c9-9d55-c56254b8b3d7] Running
	I0708 12:43:35.346127    2792 system_pods.go:61] "kube-apiserver-ha-881000" [ea5dbd32-5574-42d6-9efd-3956e499027a] Running
	I0708 12:43:35.346128    2792 system_pods.go:61] "kube-controller-manager-ha-881000" [3f0c772a-e298-47e5-a20d-4201060d8e09] Running
	I0708 12:43:35.346130    2792 system_pods.go:61] "kube-proxy-nqzkk" [0037978f-9b19-49c2-a0fd-a7757effb5e9] Running
	I0708 12:43:35.346131    2792 system_pods.go:61] "kube-scheduler-ha-881000" [03ce3397-c2e8-4b90-a33c-11fb0368a30e] Running
	I0708 12:43:35.346133    2792 system_pods.go:61] "storage-provisioner" [62d01d4e-c78c-499e-9905-7ff510f1edea] Running
	I0708 12:43:35.346136    2792 system_pods.go:74] duration metric: took 192.811125ms to wait for pod list to return data ...
	I0708 12:43:35.346139    2792 default_sa.go:34] waiting for default service account to be created ...
	I0708 12:43:35.542444    2792 request.go:629] Waited for 196.279458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0708 12:43:35.542462    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0708 12:43:35.542466    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:35.542470    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:35.542472    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:35.543806    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:35.543904    2792 default_sa.go:45] found service account: "default"
	I0708 12:43:35.543911    2792 default_sa.go:55] duration metric: took 197.7735ms for default service account to be created ...
	I0708 12:43:35.543915    2792 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 12:43:35.742464    2792 request.go:629] Waited for 198.519833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:43:35.742504    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:43:35.742508    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:35.742518    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:35.742521    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:35.744207    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:35.746115    2792 system_pods.go:86] 9 kube-system pods found
	I0708 12:43:35.746124    2792 system_pods.go:89] "coredns-7db6d8ff4d-2646x" [5a1aa968-b181-4318-a7f2-fb0f94617bd5] Running
	I0708 12:43:35.746128    2792 system_pods.go:89] "coredns-7db6d8ff4d-rlj9v" [57423cc1-b13f-45c7-b2df-71621270a61f] Running
	I0708 12:43:35.746130    2792 system_pods.go:89] "etcd-ha-881000" [b905dbae-009a-44f3-87e4-756dfae87ce6] Running
	I0708 12:43:35.746134    2792 system_pods.go:89] "kindnet-mmchf" [2f8fecb7-8906-46c9-9d55-c56254b8b3d7] Running
	I0708 12:43:35.746137    2792 system_pods.go:89] "kube-apiserver-ha-881000" [ea5dbd32-5574-42d6-9efd-3956e499027a] Running
	I0708 12:43:35.746139    2792 system_pods.go:89] "kube-controller-manager-ha-881000" [3f0c772a-e298-47e5-a20d-4201060d8e09] Running
	I0708 12:43:35.746141    2792 system_pods.go:89] "kube-proxy-nqzkk" [0037978f-9b19-49c2-a0fd-a7757effb5e9] Running
	I0708 12:43:35.746143    2792 system_pods.go:89] "kube-scheduler-ha-881000" [03ce3397-c2e8-4b90-a33c-11fb0368a30e] Running
	I0708 12:43:35.746145    2792 system_pods.go:89] "storage-provisioner" [62d01d4e-c78c-499e-9905-7ff510f1edea] Running
	I0708 12:43:35.746149    2792 system_pods.go:126] duration metric: took 202.235167ms to wait for k8s-apps to be running ...
	I0708 12:43:35.746153    2792 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 12:43:35.746245    2792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 12:43:35.752114    2792 system_svc.go:56] duration metric: took 5.959167ms WaitForService to wait for kubelet
	I0708 12:43:35.752126    2792 kubeadm.go:576] duration metric: took 7.222725916s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:43:35.752136    2792 node_conditions.go:102] verifying NodePressure condition ...
	I0708 12:43:35.942427    2792 request.go:629] Waited for 190.273208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes
	I0708 12:43:35.942454    2792 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes
	I0708 12:43:35.942457    2792 round_trippers.go:469] Request Headers:
	I0708 12:43:35.942461    2792 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:43:35.942463    2792 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:43:35.943864    2792 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:43:35.944098    2792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 12:43:35.944106    2792 node_conditions.go:123] node cpu capacity is 2
	I0708 12:43:35.944112    2792 node_conditions.go:105] duration metric: took 191.978375ms to run NodePressure ...
	I0708 12:43:35.944120    2792 start.go:240] waiting for startup goroutines ...
	I0708 12:43:35.944124    2792 start.go:245] waiting for cluster config update ...
	I0708 12:43:35.944130    2792 start.go:254] writing updated cluster config ...
	I0708 12:43:35.944462    2792 ssh_runner.go:195] Run: rm -f paused
	I0708 12:43:35.974714    2792 start.go:600] kubectl: 1.29.2, cluster: 1.30.2 (minor skew: 1)
	I0708 12:43:35.978450    2792 out.go:177] * Done! kubectl is now configured to use "ha-881000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.491401434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.491436854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.495158298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.495190139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.495198755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.495315420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.504865620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.504914109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.505015791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.505061658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 cri-dockerd[1188]: time="2024-07-08T19:43:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e5df0a87fa9059c3f3ac421a8755302f30956f52fdb892f53e54fcd528b1f104/resolv.conf as [nameserver 192.168.105.1]"
	Jul 08 19:43:33 ha-881000 cri-dockerd[1188]: time="2024-07-08T19:43:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1752461159c8042a58a99b7a8c68cb8af89b132f8723cfc3e44f8b585e3368ee/resolv.conf as [nameserver 192.168.105.1]"
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.607143638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.607185509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.607195707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.607247775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 cri-dockerd[1188]: time="2024-07-08T19:43:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e337c3f92f0c7d3deeffa13eed57058733fc29cdb578a1672ee9062838d1100c/resolv.conf as [nameserver 192.168.105.1]"
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.650182966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.650209646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.650214848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.650343543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.653606857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.654756947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.654763315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:43:33 ha-881000 dockerd[1292]: time="2024-07-08T19:43:33.654795031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	57f745d9e2f1c       2437cf7621777                                                                              4 seconds ago       Running             coredns                   0                   e337c3f92f0c7       coredns-7db6d8ff4d-rlj9v
	e5decdf53e42b       2437cf7621777                                                                              4 seconds ago       Running             coredns                   0                   1752461159c80       coredns-7db6d8ff4d-2646x
	0ae23ac6a6991       ba04bb24b9575                                                                              4 seconds ago       Running             storage-provisioner       0                   e5df0a87fa905       storage-provisioner
	8c20b27d40191       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8   7 seconds ago       Running             kindnet-cni               0                   52b9dd42202b7       kindnet-mmchf
	e3b0434a308bd       66dbb96a9149f                                                                              9 seconds ago       Running             kube-proxy                0                   f031f136a08f5       kube-proxy-nqzkk
	ed9f0e91126a2       c7dd04b1bafeb                                                                              27 seconds ago      Running             kube-scheduler            0                   e9a1e4f9ec7d4       kube-scheduler-ha-881000
	5c4705f221f30       014faa467e297                                                                              27 seconds ago      Running             etcd                      0                   59d4e027b0867       etcd-ha-881000
	db173c1aa7e67       84c601f3f72c8                                                                              27 seconds ago      Running             kube-apiserver            0                   3994029f9ba47       kube-apiserver-ha-881000
	cc323cbcdc6df       e1dcc3400d3ea                                                                              27 seconds ago      Running             kube-controller-manager   0                   109f63f7b1864       kube-controller-manager-ha-881000
	
	
	==> coredns [57f745d9e2f1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	
	
	==> coredns [e5decdf53e42] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               ha-881000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-881000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-881000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T12_43_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:43:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-881000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 19:43:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:43:32 +0000   Mon, 08 Jul 2024 19:43:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:43:32 +0000   Mon, 08 Jul 2024 19:43:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:43:32 +0000   Mon, 08 Jul 2024 19:43:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:43:32 +0000   Mon, 08 Jul 2024 19:43:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    ha-881000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147456Ki
	  pods:               110
	System Info:
	  Machine ID:                 93738340db184b2d89e381b6c5d2ace0
	  System UUID:                93738340db184b2d89e381b6c5d2ace0
	  Boot ID:                    b2c247d6-8c31-44f4-8eed-8a0c638151a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2646x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9s
	  kube-system                 coredns-7db6d8ff4d-rlj9v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9s
	  kube-system                 etcd-ha-881000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23s
	  kube-system                 kindnet-mmchf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9s
	  kube-system                 kube-apiserver-ha-881000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-controller-manager-ha-881000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-proxy-nqzkk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-scheduler-ha-881000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 9s    kube-proxy       
	  Normal  Starting                 23s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23s   kubelet          Node ha-881000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s   kubelet          Node ha-881000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s   kubelet          Node ha-881000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10s   node-controller  Node ha-881000 event: Registered Node ha-881000 in Controller
	  Normal  NodeReady                5s    kubelet          Node ha-881000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.649311] EINJ: EINJ table not found.
	[  +0.550341] systemd-fstab-generator[117]: Ignoring "noauto" option for root device
	[  +0.116810] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000366] platform regulatory.0: Falling back to sysfs fallback for: regulatory.db
	[  +8.126931] systemd-fstab-generator[481]: Ignoring "noauto" option for root device
	[  +0.066385] systemd-fstab-generator[493]: Ignoring "noauto" option for root device
	[  +1.138640] kauditd_printk_skb: 37 callbacks suppressed
	[  +0.387861] systemd-fstab-generator[860]: Ignoring "noauto" option for root device
	[  +0.164215] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.072926] systemd-fstab-generator[911]: Ignoring "noauto" option for root device
	[  +0.092840] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[Jul 8 19:43] systemd-fstab-generator[1141]: Ignoring "noauto" option for root device
	[  +0.063674] systemd-fstab-generator[1153]: Ignoring "noauto" option for root device
	[  +0.064289] systemd-fstab-generator[1165]: Ignoring "noauto" option for root device
	[  +0.093799] systemd-fstab-generator[1180]: Ignoring "noauto" option for root device
	[  +2.529288] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.035577] kauditd_printk_skb: 241 callbacks suppressed
	[  +2.302249] systemd-fstab-generator[1527]: Ignoring "noauto" option for root device
	[  +2.376920] systemd-fstab-generator[1697]: Ignoring "noauto" option for root device
	[  +0.726669] kauditd_printk_skb: 104 callbacks suppressed
	[  +3.283868] systemd-fstab-generator[2108]: Ignoring "noauto" option for root device
	[ +14.467631] kauditd_printk_skb: 52 callbacks suppressed
	[  +0.265302] systemd-fstab-generator[2525]: Ignoring "noauto" option for root device
	[  +4.845835] kauditd_printk_skb: 60 callbacks suppressed
	
	
	==> etcd [5c4705f221f3] <==
	{"level":"info","ts":"2024-07-08T19:43:10.924073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2024-07-08T19:43:10.924134Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2024-07-08T19:43:10.924281Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-08T19:43:10.924389Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T19:43:10.924415Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T19:43:10.924501Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-07-08T19:43:10.924525Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-07-08T19:43:11.306068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-08T19:43:11.306106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-08T19:43:11.30612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2024-07-08T19:43:11.306129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.30621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.306222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.306227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.314087Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.319356Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:ha-881000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T19:43:11.321333Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.321365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.321373Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.321377Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:43:11.321518Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:43:11.325963Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T19:43:11.326646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2024-07-08T19:43:11.342065Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T19:43:11.342076Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:43:37 up 0 min,  0 users,  load average: 0.75, 0.20, 0.07
	Linux ha-881000 5.10.207 #1 SMP PREEMPT Wed Jul 3 15:00:24 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8c20b27d4019] <==
	I0708 19:43:31.094017       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0708 19:43:31.094078       1 main.go:107] hostIP = 192.168.105.5
	podIP = 192.168.105.5
	I0708 19:43:31.094157       1 main.go:116] setting mtu 1500 for CNI 
	I0708 19:43:31.094166       1 main.go:146] kindnetd IP family: "ipv4"
	I0708 19:43:31.094171       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0708 19:43:31.198442       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:43:31.198484       1 main.go:227] handling current node
	
	
	==> kube-apiserver [db173c1aa7e6] <==
	I0708 19:43:12.108376       1 aggregator.go:165] initial CRD sync complete...
	I0708 19:43:12.108386       1 autoregister_controller.go:141] Starting autoregister controller
	I0708 19:43:12.108398       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0708 19:43:12.108405       1 cache.go:39] Caches are synced for autoregister controller
	I0708 19:43:12.119446       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0708 19:43:12.122598       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0708 19:43:12.122605       1 policy_source.go:224] refreshing policies
	E0708 19:43:12.152783       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0708 19:43:12.201738       1 controller.go:615] quota admission added evaluator for: namespaces
	I0708 19:43:12.308168       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 19:43:13.002697       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0708 19:43:13.004733       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0708 19:43:13.004742       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0708 19:43:13.142931       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 19:43:13.154540       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0708 19:43:13.204634       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0708 19:43:13.206693       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0708 19:43:13.207032       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 19:43:13.208791       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 19:43:14.050517       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0708 19:43:14.293886       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 19:43:14.297753       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0708 19:43:14.301478       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 19:43:28.052829       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0708 19:43:28.108360       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [cc323cbcdc6d] <==
	I0708 19:43:27.379516       1 shared_informer.go:320] Caches are synced for taint
	I0708 19:43:27.379569       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0708 19:43:27.379665       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-881000"
	I0708 19:43:27.379876       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0708 19:43:27.400488       1 shared_informer.go:320] Caches are synced for cronjob
	I0708 19:43:27.402642       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0708 19:43:27.449755       1 shared_informer.go:320] Caches are synced for disruption
	I0708 19:43:27.456110       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:43:27.502148       1 shared_informer.go:320] Caches are synced for attach detach
	I0708 19:43:27.506149       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:43:27.911596       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:43:27.957884       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:43:27.957934       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0708 19:43:28.425227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="314.836166ms"
	I0708 19:43:28.435658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="10.396584ms"
	I0708 19:43:28.435835       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="149.208µs"
	I0708 19:43:32.844754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.079µs"
	I0708 19:43:32.851504       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.888µs"
	I0708 19:43:32.855122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.561µs"
	I0708 19:43:34.205110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.198µs"
	I0708 19:43:34.217813       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="4.734129ms"
	I0708 19:43:34.217858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.281µs"
	I0708 19:43:34.230679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.799989ms"
	I0708 19:43:34.230874       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.029µs"
	I0708 19:43:37.381649       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e3b0434a308b] <==
	I0708 19:43:28.503731       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:43:28.508302       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.5"]
	I0708 19:43:28.516101       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:43:28.516115       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:43:28.516122       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:43:28.516705       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:43:28.516832       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:43:28.516838       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:43:28.517447       1 config.go:192] "Starting service config controller"
	I0708 19:43:28.517466       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:43:28.517525       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:43:28.517530       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:43:28.517796       1 config.go:319] "Starting node config controller"
	I0708 19:43:28.518198       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:43:28.618095       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 19:43:28.618123       1 shared_informer.go:320] Caches are synced for service config
	I0708 19:43:28.618242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ed9f0e91126a] <==
	W0708 19:43:12.067367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 19:43:12.068934       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 19:43:12.068365       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 19:43:12.068955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 19:43:12.068385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.068976       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.068397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 19:43:12.069003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0708 19:43:12.068425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.069013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.068441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 19:43:12.069033       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 19:43:12.068458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 19:43:12.069050       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 19:43:12.068468       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 19:43:12.069087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 19:43:12.068628       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 19:43:12.069141       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 19:43:12.068640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.069171       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.978094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.978251       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.987481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.987495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0708 19:43:13.665698       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 08 19:43:27 ha-881000 kubelet[2114]: I0708 19:43:27.273244    2114 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.061446    2114 topology_manager.go:215] "Topology Admit Handler" podUID="0037978f-9b19-49c2-a0fd-a7757effb5e9" podNamespace="kube-system" podName="kube-proxy-nqzkk"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.062525    2114 topology_manager.go:215] "Topology Admit Handler" podUID="2f8fecb7-8906-46c9-9d55-c56254b8b3d7" podNamespace="kube-system" podName="kindnet-mmchf"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214339    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0037978f-9b19-49c2-a0fd-a7757effb5e9-lib-modules\") pod \"kube-proxy-nqzkk\" (UID: \"0037978f-9b19-49c2-a0fd-a7757effb5e9\") " pod="kube-system/kube-proxy-nqzkk"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214363    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgw6t\" (UniqueName: \"kubernetes.io/projected/0037978f-9b19-49c2-a0fd-a7757effb5e9-kube-api-access-zgw6t\") pod \"kube-proxy-nqzkk\" (UID: \"0037978f-9b19-49c2-a0fd-a7757effb5e9\") " pod="kube-system/kube-proxy-nqzkk"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214374    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0037978f-9b19-49c2-a0fd-a7757effb5e9-kube-proxy\") pod \"kube-proxy-nqzkk\" (UID: \"0037978f-9b19-49c2-a0fd-a7757effb5e9\") " pod="kube-system/kube-proxy-nqzkk"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214385    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0037978f-9b19-49c2-a0fd-a7757effb5e9-xtables-lock\") pod \"kube-proxy-nqzkk\" (UID: \"0037978f-9b19-49c2-a0fd-a7757effb5e9\") " pod="kube-system/kube-proxy-nqzkk"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214392    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f8fecb7-8906-46c9-9d55-c56254b8b3d7-lib-modules\") pod \"kindnet-mmchf\" (UID: \"2f8fecb7-8906-46c9-9d55-c56254b8b3d7\") " pod="kube-system/kindnet-mmchf"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214400    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2f8fecb7-8906-46c9-9d55-c56254b8b3d7-cni-cfg\") pod \"kindnet-mmchf\" (UID: \"2f8fecb7-8906-46c9-9d55-c56254b8b3d7\") " pod="kube-system/kindnet-mmchf"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214407    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f8fecb7-8906-46c9-9d55-c56254b8b3d7-xtables-lock\") pod \"kindnet-mmchf\" (UID: \"2f8fecb7-8906-46c9-9d55-c56254b8b3d7\") " pod="kube-system/kindnet-mmchf"
	Jul 08 19:43:28 ha-881000 kubelet[2114]: I0708 19:43:28.214414    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvkr9\" (UniqueName: \"kubernetes.io/projected/2f8fecb7-8906-46c9-9d55-c56254b8b3d7-kube-api-access-kvkr9\") pod \"kindnet-mmchf\" (UID: \"2f8fecb7-8906-46c9-9d55-c56254b8b3d7\") " pod="kube-system/kindnet-mmchf"
	Jul 08 19:43:31 ha-881000 kubelet[2114]: I0708 19:43:31.195873    2114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nqzkk" podStartSLOduration=3.195847979 podStartE2EDuration="3.195847979s" podCreationTimestamp="2024-07-08 19:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-08 19:43:29.193994268 +0000 UTC m=+15.129727133" watchObservedRunningTime="2024-07-08 19:43:31.195847979 +0000 UTC m=+17.131580886"
	Jul 08 19:43:32 ha-881000 kubelet[2114]: I0708 19:43:32.833798    2114 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Jul 08 19:43:32 ha-881000 kubelet[2114]: I0708 19:43:32.844121    2114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mmchf" podStartSLOduration=2.390346401 podStartE2EDuration="4.844106498s" podCreationTimestamp="2024-07-08 19:43:28 +0000 UTC" firstStartedPulling="2024-07-08 19:43:28.500754851 +0000 UTC m=+14.436487716" lastFinishedPulling="2024-07-08 19:43:30.954514948 +0000 UTC m=+16.890247813" observedRunningTime="2024-07-08 19:43:31.195951852 +0000 UTC m=+17.131684717" watchObservedRunningTime="2024-07-08 19:43:32.844106498 +0000 UTC m=+18.779839405"
	Jul 08 19:43:32 ha-881000 kubelet[2114]: I0708 19:43:32.844546    2114 topology_manager.go:215] "Topology Admit Handler" podUID="5a1aa968-b181-4318-a7f2-fb0f94617bd5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2646x"
	Jul 08 19:43:32 ha-881000 kubelet[2114]: I0708 19:43:32.844652    2114 topology_manager.go:215] "Topology Admit Handler" podUID="57423cc1-b13f-45c7-b2df-71621270a61f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rlj9v"
	Jul 08 19:43:32 ha-881000 kubelet[2114]: I0708 19:43:32.846475    2114 topology_manager.go:215] "Topology Admit Handler" podUID="62d01d4e-c78c-499e-9905-7ff510f1edea" podNamespace="kube-system" podName="storage-provisioner"
	Jul 08 19:43:33 ha-881000 kubelet[2114]: I0708 19:43:33.044973    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/62d01d4e-c78c-499e-9905-7ff510f1edea-tmp\") pod \"storage-provisioner\" (UID: \"62d01d4e-c78c-499e-9905-7ff510f1edea\") " pod="kube-system/storage-provisioner"
	Jul 08 19:43:33 ha-881000 kubelet[2114]: I0708 19:43:33.045056    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a1aa968-b181-4318-a7f2-fb0f94617bd5-config-volume\") pod \"coredns-7db6d8ff4d-2646x\" (UID: \"5a1aa968-b181-4318-a7f2-fb0f94617bd5\") " pod="kube-system/coredns-7db6d8ff4d-2646x"
	Jul 08 19:43:33 ha-881000 kubelet[2114]: I0708 19:43:33.045068    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzlkk\" (UniqueName: \"kubernetes.io/projected/5a1aa968-b181-4318-a7f2-fb0f94617bd5-kube-api-access-tzlkk\") pod \"coredns-7db6d8ff4d-2646x\" (UID: \"5a1aa968-b181-4318-a7f2-fb0f94617bd5\") " pod="kube-system/coredns-7db6d8ff4d-2646x"
	Jul 08 19:43:33 ha-881000 kubelet[2114]: I0708 19:43:33.045078    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57423cc1-b13f-45c7-b2df-71621270a61f-config-volume\") pod \"coredns-7db6d8ff4d-rlj9v\" (UID: \"57423cc1-b13f-45c7-b2df-71621270a61f\") " pod="kube-system/coredns-7db6d8ff4d-rlj9v"
	Jul 08 19:43:33 ha-881000 kubelet[2114]: I0708 19:43:33.045087    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp5ns\" (UniqueName: \"kubernetes.io/projected/57423cc1-b13f-45c7-b2df-71621270a61f-kube-api-access-qp5ns\") pod \"coredns-7db6d8ff4d-rlj9v\" (UID: \"57423cc1-b13f-45c7-b2df-71621270a61f\") " pod="kube-system/coredns-7db6d8ff4d-rlj9v"
	Jul 08 19:43:33 ha-881000 kubelet[2114]: I0708 19:43:33.045095    2114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9xs9\" (UniqueName: \"kubernetes.io/projected/62d01d4e-c78c-499e-9905-7ff510f1edea-kube-api-access-c9xs9\") pod \"storage-provisioner\" (UID: \"62d01d4e-c78c-499e-9905-7ff510f1edea\") " pod="kube-system/storage-provisioner"
	Jul 08 19:43:34 ha-881000 kubelet[2114]: I0708 19:43:34.206806    2114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2646x" podStartSLOduration=6.206793657 podStartE2EDuration="6.206793657s" podCreationTimestamp="2024-07-08 19:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-08 19:43:34.206576803 +0000 UTC m=+20.142309710" watchObservedRunningTime="2024-07-08 19:43:34.206793657 +0000 UTC m=+20.142526564"
	Jul 08 19:43:34 ha-881000 kubelet[2114]: I0708 19:43:34.224712    2114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.224699294 podStartE2EDuration="6.224699294s" podCreationTimestamp="2024-07-08 19:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-08 19:43:34.220522604 +0000 UTC m=+20.156255511" watchObservedRunningTime="2024-07-08 19:43:34.224699294 +0000 UTC m=+20.160432201"
	
	
	==> storage-provisioner [0ae23ac6a699] <==
	I0708 19:43:33.659595       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 19:43:33.665926       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 19:43:33.666090       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 19:43:33.672847       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 19:43:33.673094       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-881000_e7528831-25b3-4257-a2ce-dbc5f5c23e47!
	I0708 19:43:33.683818       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bb7994d-1374-425c-b6a5-ded5a8749b0f", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-881000_e7528831-25b3-4257-a2ce-dbc5f5c23e47 became leader
	I0708 19:43:33.773516       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-881000_e7528831-25b3-4257-a2ce-dbc5f5c23e47!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ha-881000 -n ha-881000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-881000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.85s)

TestMultiControlPlane/serial/StopCluster (9.31s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 stop -v=7 --alsologtostderr
E0708 12:43:38.008839    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-881000 stop -v=7 --alsologtostderr: (9.213465333s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 7 (67.462125ms)

-- stdout --
	ha-881000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0708 12:43:47.140815    2846 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:43:47.141057    2846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:43:47.141061    2846 out.go:304] Setting ErrFile to fd 2...
	I0708 12:43:47.141065    2846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:43:47.141259    2846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:43:47.141431    2846 out.go:298] Setting JSON to false
	I0708 12:43:47.141444    2846 mustload.go:65] Loading cluster: ha-881000
	I0708 12:43:47.141479    2846 notify.go:220] Checking for updates...
	I0708 12:43:47.141720    2846 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:43:47.141727    2846 status.go:255] checking status of ha-881000 ...
	I0708 12:43:47.142000    2846 status.go:330] ha-881000 host status = "Stopped" (err=<nil>)
	I0708 12:43:47.142004    2846 status.go:343] host is not running, skipping remaining checks
	I0708 12:43:47.142007    2846 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 7 (31.588959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (9.31s)

TestMultiControlPlane/serial/RestartCluster (104.12s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-881000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
E0708 12:44:59.929493    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-arm64 start -p ha-881000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : (1m42.969071916s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:571: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:574: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:577: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:580: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	'

-- /stdout --
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  --             | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  --             | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  -- nslookup    | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| node    | add -p ha-881000 -v=7                | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-881000 node stop m02 -v=7         | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-881000 node start m02 -v=7        | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-881000 -v=7               | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:41 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-881000 -v=7                    | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:41 PDT | 08 Jul 24 12:42 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-881000 --wait=true -v=7        | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:42 PDT | 08 Jul 24 12:43 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-881000                    | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT |                     |
	| node    | ha-881000 node delete m03 -v=7       | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | ha-881000 stop -v=7                  | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT | 08 Jul 24 12:43 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-881000 --wait=true             | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT | 08 Jul 24 12:45 PDT |
	|         | -v=7 --alsologtostderr               |           |         |         |                     |                     |
	|         | --driver=qemu2                       |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 12:43:47
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 12:43:47.203036    2850 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:43:47.203222    2850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:43:47.203226    2850 out.go:304] Setting ErrFile to fd 2...
	I0708 12:43:47.203228    2850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:43:47.203368    2850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:43:47.204366    2850 out.go:298] Setting JSON to false
	I0708 12:43:47.220335    2850 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2595,"bootTime":1720465232,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:43:47.220396    2850 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:43:47.226067    2850 out.go:177] * [ha-881000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:43:47.233033    2850 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:43:47.233084    2850 notify.go:220] Checking for updates...
	I0708 12:43:47.239959    2850 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:43:47.242984    2850 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:43:47.246029    2850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:43:47.248929    2850 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:43:47.252022    2850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:43:47.255376    2850 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:43:47.255631    2850 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:43:47.259894    2850 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 12:43:47.266997    2850 start.go:297] selected driver: qemu2
	I0708 12:43:47.267005    2850 start.go:901] validating driver "qemu2" against &{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:43:47.267055    2850 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:43:47.269481    2850 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:43:47.269519    2850 cni.go:84] Creating CNI manager for ""
	I0708 12:43:47.269525    2850 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:43:47.269569    2850 start.go:340] cluster config:
	{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:43:47.273287    2850 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:43:47.281009    2850 out.go:177] * Starting "ha-881000" primary control-plane node in "ha-881000" cluster
	I0708 12:43:47.284840    2850 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:43:47.284857    2850 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:43:47.284864    2850 cache.go:56] Caching tarball of preloaded images
	I0708 12:43:47.284919    2850 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:43:47.284925    2850 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:43:47.284990    2850 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json ...
	I0708 12:43:47.285421    2850 start.go:360] acquireMachinesLock for ha-881000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:43:47.285455    2850 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "ha-881000"
	I0708 12:43:47.285464    2850 start.go:96] Skipping create...Using existing machine configuration
	I0708 12:43:47.285472    2850 fix.go:54] fixHost starting: 
	I0708 12:43:47.285587    2850 fix.go:112] recreateIfNeeded on ha-881000: state=Stopped err=<nil>
	W0708 12:43:47.285596    2850 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 12:43:47.293796    2850 out.go:177] * Restarting existing qemu2 VM for "ha-881000" ...
	I0708 12:43:47.297910    2850 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:75:66:b4:8a:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/disk.qcow2
	I0708 12:43:47.338613    2850 main.go:141] libmachine: STDOUT: 
	I0708 12:43:47.338642    2850 main.go:141] libmachine: STDERR: 
	I0708 12:43:47.338646    2850 main.go:141] libmachine: Attempt 0
	I0708 12:43:47.338656    2850 main.go:141] libmachine: Searching for de:75:66:b4:8a:80 in /var/db/dhcpd_leases ...
	I0708 12:43:47.338720    2850 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0708 12:43:47.338738    2850 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668c4170}
	I0708 12:43:47.338745    2850 main.go:141] libmachine: Found match: de:75:66:b4:8a:80
	I0708 12:43:47.338752    2850 main.go:141] libmachine: IP: 192.168.105.5
	I0708 12:43:47.338756    2850 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0708 12:44:06.880157    2850 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json ...
	I0708 12:44:06.880902    2850 machine.go:94] provisionDockerMachine start ...
	I0708 12:44:06.881165    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:06.881700    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:06.881713    2850 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 12:44:06.944368    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 12:44:06.944389    2850 buildroot.go:166] provisioning hostname "ha-881000"
	I0708 12:44:06.944458    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:06.944617    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:06.944624    2850 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-881000 && echo "ha-881000" | sudo tee /etc/hostname
	I0708 12:44:07.000524    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-881000
	
	I0708 12:44:07.000567    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:07.000687    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:07.000698    2850 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-881000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-881000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-881000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 12:44:07.049065    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 12:44:07.049078    2850 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19195-1270/.minikube CaCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19195-1270/.minikube}
	I0708 12:44:07.049089    2850 buildroot.go:174] setting up certificates
	I0708 12:44:07.049097    2850 provision.go:84] configureAuth start
	I0708 12:44:07.049100    2850 provision.go:143] copyHostCerts
	I0708 12:44:07.049124    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 12:44:07.049183    2850 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem, removing ...
	I0708 12:44:07.049188    2850 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 12:44:07.049594    2850 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem (1078 bytes)
	I0708 12:44:07.049759    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 12:44:07.049786    2850 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem, removing ...
	I0708 12:44:07.049790    2850 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 12:44:07.049854    2850 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem (1123 bytes)
	I0708 12:44:07.049950    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 12:44:07.049978    2850 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem, removing ...
	I0708 12:44:07.049982    2850 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 12:44:07.050038    2850 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem (1675 bytes)
	I0708 12:44:07.050147    2850 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem org=jenkins.ha-881000 san=[127.0.0.1 192.168.105.5 ha-881000 localhost minikube]
	I0708 12:44:07.117812    2850 provision.go:177] copyRemoteCerts
	I0708 12:44:07.117840    2850 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 12:44:07.117846    2850 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:44:07.141822    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 12:44:07.141866    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0708 12:44:07.149723    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 12:44:07.149757    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 12:44:07.157502    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 12:44:07.157533    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 12:44:07.165410    2850 provision.go:87] duration metric: took 116.310917ms to configureAuth
	I0708 12:44:07.165421    2850 buildroot.go:189] setting minikube options for container-runtime
	I0708 12:44:07.165537    2850 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:44:07.165568    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:07.165650    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:07.165656    2850 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0708 12:44:07.211537    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0708 12:44:07.211545    2850 buildroot.go:70] root file system type: tmpfs
	I0708 12:44:07.211595    2850 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0708 12:44:07.211635    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:07.211741    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:07.211773    2850 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0708 12:44:07.259492    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0708 12:44:07.259544    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:07.259647    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:07.259656    2850 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0708 12:44:08.699211    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0708 12:44:08.699224    2850 machine.go:97] duration metric: took 1.818356625s to provisionDockerMachine
	I0708 12:44:08.699231    2850 start.go:293] postStartSetup for "ha-881000" (driver="qemu2")
	I0708 12:44:08.699237    2850 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 12:44:08.699306    2850 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 12:44:08.699315    2850 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:44:08.724566    2850 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 12:44:08.726115    2850 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 12:44:08.726123    2850 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/addons for local assets ...
	I0708 12:44:08.726215    2850 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/files for local assets ...
	I0708 12:44:08.726342    2850 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> 17672.pem in /etc/ssl/certs
	I0708 12:44:08.726347    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> /etc/ssl/certs/17672.pem
	I0708 12:44:08.726466    2850 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 12:44:08.729622    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /etc/ssl/certs/17672.pem (1708 bytes)
	I0708 12:44:08.737764    2850 start.go:296] duration metric: took 38.5285ms for postStartSetup
	I0708 12:44:08.737776    2850 fix.go:56] duration metric: took 21.452818125s for fixHost
	I0708 12:44:08.737808    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:08.737906    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:08.737913    2850 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 12:44:08.782598    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720467848.618609754
	
	I0708 12:44:08.782613    2850 fix.go:216] guest clock: 1720467848.618609754
	I0708 12:44:08.782617    2850 fix.go:229] Guest: 2024-07-08 12:44:08.618609754 -0700 PDT Remote: 2024-07-08 12:44:08.737777 -0700 PDT m=+21.554817334 (delta=-119.167246ms)
	I0708 12:44:08.782628    2850 fix.go:200] guest clock delta is within tolerance: -119.167246ms
	I0708 12:44:08.782631    2850 start.go:83] releasing machines lock for "ha-881000", held for 21.497684416s
	I0708 12:44:08.782905    2850 ssh_runner.go:195] Run: cat /version.json
	I0708 12:44:08.782909    2850 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 12:44:08.782912    2850 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:44:08.782927    2850 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:44:08.850123    2850 ssh_runner.go:195] Run: systemctl --version
	I0708 12:44:08.852483    2850 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 12:44:08.854488    2850 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 12:44:08.854513    2850 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 12:44:08.860424    2850 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 12:44:08.860432    2850 start.go:494] detecting cgroup driver to use...
	I0708 12:44:08.860498    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:44:08.866999    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0708 12:44:08.870556    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0708 12:44:08.874028    2850 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0708 12:44:08.874056    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0708 12:44:08.877532    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:44:08.881087    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0708 12:44:08.884937    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:44:08.888816    2850 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 12:44:08.892627    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0708 12:44:08.896492    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0708 12:44:08.900576    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0708 12:44:08.904522    2850 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 12:44:08.908642    2850 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 12:44:08.912271    2850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:44:08.995342    2850 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0708 12:44:09.004364    2850 start.go:494] detecting cgroup driver to use...
	I0708 12:44:09.004428    2850 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0708 12:44:09.013252    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:44:09.020793    2850 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 12:44:09.031124    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:44:09.036852    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:44:09.042266    2850 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0708 12:44:09.088268    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:44:09.094797    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:44:09.101383    2850 ssh_runner.go:195] Run: which cri-dockerd
	I0708 12:44:09.102715    2850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0708 12:44:09.105836    2850 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0708 12:44:09.111803    2850 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0708 12:44:09.186218    2850 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0708 12:44:09.271456    2850 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0708 12:44:09.271510    2850 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0708 12:44:09.277562    2850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:44:09.358997    2850 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 12:44:11.572555    2850 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.213594875s)
	I0708 12:44:11.572614    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0708 12:44:11.577969    2850 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0708 12:44:11.585005    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 12:44:11.590567    2850 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0708 12:44:11.674609    2850 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0708 12:44:11.758014    2850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:44:11.829180    2850 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0708 12:44:11.835750    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 12:44:11.841363    2850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:44:11.922218    2850 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0708 12:44:11.946744    2850 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0708 12:44:11.946808    2850 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0708 12:44:11.949414    2850 start.go:562] Will wait 60s for crictl version
	I0708 12:44:11.949450    2850 ssh_runner.go:195] Run: which crictl
	I0708 12:44:11.951025    2850 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 12:44:11.966187    2850 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0708 12:44:11.966254    2850 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 12:44:11.977143    2850 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 12:44:11.990230    2850 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0708 12:44:11.990352    2850 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0708 12:44:11.991832    2850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 12:44:11.996404    2850 kubeadm.go:877] updating cluster {Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 12:44:11.996453    2850 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:44:11.996490    2850 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 12:44:12.002536    2850 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0708 12:44:12.002544    2850 docker.go:615] Images already preloaded, skipping extraction
	I0708 12:44:12.002600    2850 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 12:44:12.008391    2850 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0708 12:44:12.008408    2850 cache_images.go:84] Images are preloaded, skipping loading
	I0708 12:44:12.008412    2850 kubeadm.go:928] updating node { 192.168.105.5 8443 v1.30.2 docker true true} ...
	I0708 12:44:12.008472    2850 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-881000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 12:44:12.008526    2850 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0708 12:44:12.016035    2850 cni.go:84] Creating CNI manager for ""
	I0708 12:44:12.016043    2850 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:44:12.016048    2850 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 12:44:12.016059    2850 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-881000 NodeName:ha-881000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 12:44:12.016115    2850 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-881000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 12:44:12.016169    2850 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 12:44:12.020558    2850 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 12:44:12.020590    2850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 12:44:12.024214    2850 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0708 12:44:12.030381    2850 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 12:44:12.036219    2850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
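The kubeadm config rendered above is a single multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the log shows being uploaded to the node as /var/tmp/minikube/kubeadm.yaml.new; note the KubeletConfiguration disables disk-based eviction by setting the evictionHard thresholds to 0%. As a minimal sketch (not minikube code, and assuming gopkg.in/yaml.v3 is available on the reader's machine), the documents can be split and inspected like this:

```go
// Sketch: read the multi-document kubeadm YAML uploaded above and print the
// KubeletConfiguration eviction thresholds. The file path is the one shown
// in the log and only exists inside the test VM.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("evictionHard:", doc["evictionHard"])
		}
	}
}
```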
	I0708 12:44:12.042370    2850 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0708 12:44:12.043714    2850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 12:44:12.048099    2850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:44:12.123087    2850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 12:44:12.130120    2850 certs.go:68] Setting up /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000 for IP: 192.168.105.5
	I0708 12:44:12.130127    2850 certs.go:194] generating shared ca certs ...
	I0708 12:44:12.130135    2850 certs.go:226] acquiring lock for ca certs: {Name:mka13b605a6983b2618b91f3a0bdec43c132a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:44:12.130297    2850 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key
	I0708 12:44:12.130354    2850 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key
	I0708 12:44:12.130361    2850 certs.go:256] generating profile certs ...
	I0708 12:44:12.130430    2850 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key
	I0708 12:44:12.130487    2850 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.174b6ad8
	I0708 12:44:12.130531    2850 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key
	I0708 12:44:12.130540    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 12:44:12.130552    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 12:44:12.130563    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 12:44:12.130574    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 12:44:12.130584    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 12:44:12.130604    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 12:44:12.130622    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 12:44:12.130633    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 12:44:12.130700    2850 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem (1338 bytes)
	W0708 12:44:12.130737    2850 certs.go:480] ignoring /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767_empty.pem, impossibly tiny 0 bytes
	I0708 12:44:12.130742    2850 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 12:44:12.130763    2850 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem (1078 bytes)
	I0708 12:44:12.130783    2850 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem (1123 bytes)
	I0708 12:44:12.130805    2850 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem (1675 bytes)
	I0708 12:44:12.130843    2850 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem (1708 bytes)
	I0708 12:44:12.130869    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> /usr/share/ca-certificates/17672.pem
	I0708 12:44:12.130881    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:44:12.130891    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem -> /usr/share/ca-certificates/1767.pem
	I0708 12:44:12.131188    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 12:44:12.143161    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 12:44:12.155852    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 12:44:12.168576    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 12:44:12.179921    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0708 12:44:12.190763    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 12:44:12.202701    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 12:44:12.213932    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 12:44:12.222730    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /usr/share/ca-certificates/17672.pem (1708 bytes)
	I0708 12:44:12.233466    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 12:44:12.242648    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem --> /usr/share/ca-certificates/1767.pem (1338 bytes)
	I0708 12:44:12.252255    2850 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 12:44:12.258369    2850 ssh_runner.go:195] Run: openssl version
	I0708 12:44:12.260678    2850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17672.pem && ln -fs /usr/share/ca-certificates/17672.pem /etc/ssl/certs/17672.pem"
	I0708 12:44:12.264831    2850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17672.pem
	I0708 12:44:12.266447    2850 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:34 /usr/share/ca-certificates/17672.pem
	I0708 12:44:12.266468    2850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17672.pem
	I0708 12:44:12.268480    2850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17672.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 12:44:12.272431    2850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 12:44:12.276375    2850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:44:12.277931    2850 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:44:12.277953    2850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:44:12.279904    2850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 12:44:12.283758    2850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1767.pem && ln -fs /usr/share/ca-certificates/1767.pem /etc/ssl/certs/1767.pem"
	I0708 12:44:12.287995    2850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1767.pem
	I0708 12:44:12.289585    2850 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:34 /usr/share/ca-certificates/1767.pem
	I0708 12:44:12.289604    2850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1767.pem
	I0708 12:44:12.291681    2850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1767.pem /etc/ssl/certs/51391683.0"
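The three commands above install each certificate under /usr/share/ca-certificates and then symlink it into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA), which is how OpenSSL locates trusted CAs. A hedged Go sketch of the same idea, shelling out to the openssl binary just as the log does (requires openssl and write access to /etc/ssl/certs; not minikube's implementation):

```go
// Compute the OpenSSL subject hash of a CA certificate and create the
// /etc/ssl/certs/<hash>.0 symlink, mirroring the shell commands in the log.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const cert = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", cert)
}
```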
	I0708 12:44:12.295518    2850 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 12:44:12.297115    2850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 12:44:12.299246    2850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 12:44:12.301320    2850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 12:44:12.303463    2850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 12:44:12.305528    2850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 12:44:12.307618    2850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
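Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate will expire within the next 86400 seconds (24 hours), which is what triggers regeneration. A minimal Go equivalent using only the standard library (the path is just one of the certificates named in the log):

```go
// Parse a PEM certificate and report whether it expires within 24 hours,
// matching the semantics of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
```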
	I0708 12:44:12.309856    2850 kubeadm.go:391] StartCluster: {Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:44:12.309921    2850 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0708 12:44:12.315094    2850 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 12:44:12.318646    2850 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 12:44:12.318652    2850 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 12:44:12.318654    2850 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 12:44:12.318674    2850 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 12:44:12.321937    2850 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:44:12.322222    2850 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:44:12.322273    2850 kubeconfig.go:62] /Users/jenkins/minikube-integration/19195-1270/kubeconfig needs updating (will repair): [kubeconfig missing "ha-881000" cluster setting kubeconfig missing "ha-881000" context setting]
	I0708 12:44:12.322403    2850 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:44:12.322885    2850 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:44:12.323014    2850 kapi.go:59] client config for ha-881000: &rest.Config{Host:"https://192.168.105.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066fb4f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 12:44:12.323226    2850 cert_rotation.go:137] Starting client certificate rotation controller
	I0708 12:44:12.323330    2850 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 12:44:12.326595    2850 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.105.5
	I0708 12:44:12.326611    2850 kubeadm.go:1154] stopping kube-system containers ...
	I0708 12:44:12.326653    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0708 12:44:12.333291    2850 docker.go:483] Stopping containers: [57f745d9e2f1 e5decdf53e42 0ae23ac6a699 e5df0a87fa90 e337c3f92f0c 1752461159c8 8c20b27d4019 e3b0434a308b 52b9dd42202b f031f136a08f ed9f0e91126a 5c4705f221f3 db173c1aa7e6 cc323cbcdc6d e9a1e4f9ec7d 109f63f7b186 59d4e027b086 3994029f9ba4]
	I0708 12:44:12.333349    2850 ssh_runner.go:195] Run: docker stop 57f745d9e2f1 e5decdf53e42 0ae23ac6a699 e5df0a87fa90 e337c3f92f0c 1752461159c8 8c20b27d4019 e3b0434a308b 52b9dd42202b f031f136a08f ed9f0e91126a 5c4705f221f3 db173c1aa7e6 cc323cbcdc6d e9a1e4f9ec7d 109f63f7b186 59d4e027b086 3994029f9ba4
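Before restarting the primary control plane, the runner lists every container whose Docker name matches the kube-system pod pattern and stops them by ID, as shown in the two commands above. A hedged sketch of the same two-step pattern, shelling out to the docker CLI with the filter taken from the log (illustrative only, not minikube's implementation):

```go
// List containers matching the kube-system name pattern and stop them,
// mirroring the `docker ps -a --filter ...` / `docker stop ...` pair above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers found")
		return
	}
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("stopped %d containers\n", len(ids))
}
```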
	I0708 12:44:12.339775    2850 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 12:44:12.346385    2850 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 12:44:12.349708    2850 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 12:44:12.349715    2850 kubeadm.go:156] found existing configuration files:
	
	I0708 12:44:12.349734    2850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 12:44:12.353095    2850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 12:44:12.353120    2850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 12:44:12.356614    2850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 12:44:12.360206    2850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 12:44:12.360235    2850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 12:44:12.363635    2850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 12:44:12.366686    2850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 12:44:12.366710    2850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 12:44:12.369771    2850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 12:44:12.372940    2850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 12:44:12.372971    2850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 12:44:12.376521    2850 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 12:44:12.379929    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 12:44:12.429414    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 12:44:13.059616    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 12:44:13.177332    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 12:44:13.215247    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 12:44:13.254347    2850 api_server.go:52] waiting for apiserver process to appear ...
	I0708 12:44:13.254455    2850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:44:13.756509    2850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:44:14.256490    2850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:44:14.261485    2850 api_server.go:72] duration metric: took 1.007164792s to wait for apiserver process to appear ...
	I0708 12:44:14.261494    2850 api_server.go:88] waiting for apiserver healthz status ...
	I0708 12:44:14.261503    2850 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:44:15.464003    2850 api_server.go:279] https://192.168.105.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 12:44:15.464018    2850 api_server.go:103] status: https://192.168.105.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 12:44:15.464029    2850 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:44:15.503680    2850 api_server.go:279] https://192.168.105.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 12:44:15.503696    2850 api_server.go:103] status: https://192.168.105.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 12:44:15.763559    2850 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:44:15.766515    2850 api_server.go:279] https://192.168.105.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 12:44:15.766528    2850 api_server.go:103] status: https://192.168.105.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 12:44:16.263502    2850 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:44:16.266091    2850 api_server.go:279] https://192.168.105.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 12:44:16.266101    2850 api_server.go:103] status: https://192.168.105.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 12:44:16.763535    2850 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:44:16.766370    2850 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0708 12:44:16.766408    2850 round_trippers.go:463] GET https://192.168.105.5:8443/version
	I0708 12:44:16.766412    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:16.766416    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:16.766419    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:16.770263    2850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 12:44:16.770305    2850 api_server.go:141] control plane version: v1.30.2
	I0708 12:44:16.770312    2850 api_server.go:131] duration metric: took 2.50887525s to wait for apiserver health ...
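The sequence above is the usual restart pattern: the first anonymous probe gets 403, subsequent probes get 500 while the apiserver's post-start hooks (RBAC bootstrap roles, priority classes, apiservice registration) finish, and after roughly 2.5 seconds /healthz returns 200. As a minimal polling sketch only (minikube itself authenticates with the client certificates shown earlier, whereas this skips TLS verification for brevity; the endpoint is the one in the log):

```go
// Poll the apiserver /healthz endpoint until it returns 200 "ok" or a
// deadline elapses. InsecureSkipVerify is an assumption for this sketch.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.105.5:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Println("not ready yet, status", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver")
}
```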
	I0708 12:44:16.770316    2850 cni.go:84] Creating CNI manager for ""
	I0708 12:44:16.770320    2850 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:44:16.774540    2850 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0708 12:44:16.778515    2850 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0708 12:44:16.780799    2850 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0708 12:44:16.780805    2850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0708 12:44:16.787435    2850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0708 12:44:16.998658    2850 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 12:44:16.998773    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:44:16.998777    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:16.998782    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:16.998785    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.000294    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:17.004459    2850 system_pods.go:59] 9 kube-system pods found
	I0708 12:44:17.004471    2850 system_pods.go:61] "coredns-7db6d8ff4d-2646x" [5a1aa968-b181-4318-a7f2-fb0f94617bd5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 12:44:17.004474    2850 system_pods.go:61] "coredns-7db6d8ff4d-rlj9v" [57423cc1-b13f-45c7-b2df-71621270a61f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 12:44:17.004478    2850 system_pods.go:61] "etcd-ha-881000" [b905dbae-009a-44f3-87e4-756dfae87ce6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 12:44:17.004481    2850 system_pods.go:61] "kindnet-mmchf" [2f8fecb7-8906-46c9-9d55-c56254b8b3d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0708 12:44:17.004483    2850 system_pods.go:61] "kube-apiserver-ha-881000" [ea5dbd32-5574-42d6-9efd-3956e499027a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 12:44:17.004487    2850 system_pods.go:61] "kube-controller-manager-ha-881000" [3f0c772a-e298-47e5-a20d-4201060d8e09] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 12:44:17.004489    2850 system_pods.go:61] "kube-proxy-nqzkk" [0037978f-9b19-49c2-a0fd-a7757effb5e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 12:44:17.004501    2850 system_pods.go:61] "kube-scheduler-ha-881000" [03ce3397-c2e8-4b90-a33c-11fb0368a30e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 12:44:17.004505    2850 system_pods.go:61] "storage-provisioner" [62d01d4e-c78c-499e-9905-7ff510f1edea] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 12:44:17.004510    2850 system_pods.go:74] duration metric: took 5.838958ms to wait for pod list to return data ...
	I0708 12:44:17.004515    2850 node_conditions.go:102] verifying NodePressure condition ...
	I0708 12:44:17.004542    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes
	I0708 12:44:17.004545    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.004548    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.004550    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.005727    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:17.006038    2850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 12:44:17.006044    2850 node_conditions.go:123] node cpu capacity is 2
	I0708 12:44:17.006051    2850 node_conditions.go:105] duration metric: took 1.533833ms to run NodePressure ...
	I0708 12:44:17.006057    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 12:44:17.245923    2850 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 12:44:17.245984    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0708 12:44:17.245988    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.245991    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.245999    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.247183    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:17.248109    2850 kubeadm.go:733] kubelet initialised
	I0708 12:44:17.248118    2850 kubeadm.go:734] duration metric: took 2.183ms waiting for restarted kubelet to initialise ...
	I0708 12:44:17.248122    2850 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 12:44:17.248146    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:44:17.248150    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.248154    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.248157    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.249946    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:17.252016    2850 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:17.252049    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:17.252052    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.252056    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.252058    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.252777    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.253056    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.253060    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.253064    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.253067    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.253789    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.254087    2850 pod_ready.go:97] node "ha-881000" hosting pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.254093    2850 pod_ready.go:81] duration metric: took 2.068791ms for pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:17.254098    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.254101    2850 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:17.254121    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rlj9v
	I0708 12:44:17.254124    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.254128    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.254130    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.254769    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.255058    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.255061    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.255064    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.255066    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.255634    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.255780    2850 pod_ready.go:97] node "ha-881000" hosting pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.255786    2850 pod_ready.go:81] duration metric: took 1.681917ms for pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:17.255789    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.255791    2850 pod_ready.go:78] waiting up to 4m0s for pod "etcd-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:17.255807    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-881000
	I0708 12:44:17.255810    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.255813    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.255815    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.256424    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.256669    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.256672    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.256675    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.256678    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.257307    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.257571    2850 pod_ready.go:97] node "ha-881000" hosting pod "etcd-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.257575    2850 pod_ready.go:81] duration metric: took 1.781792ms for pod "etcd-ha-881000" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:17.257578    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "etcd-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.257583    2850 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:17.257597    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-881000
	I0708 12:44:17.257599    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.257602    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.257605    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.258263    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.400775    2850 request.go:629] Waited for 142.183583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.400814    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.400819    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.400823    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.400833    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.405958    2850 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0708 12:44:17.406293    2850 pod_ready.go:97] node "ha-881000" hosting pod "kube-apiserver-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.406306    2850 pod_ready.go:81] duration metric: took 148.723459ms for pod "kube-apiserver-ha-881000" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:17.406313    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "kube-apiserver-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.406317    2850 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:17.600801    2850 request.go:629] Waited for 194.4485ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-881000
	I0708 12:44:17.600827    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-881000
	I0708 12:44:17.600831    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.600835    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.600838    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.601946    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:17.799018    2850 request.go:629] Waited for 196.693583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.799048    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.799051    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.799056    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.799058    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.799927    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.800125    2850 pod_ready.go:97] node "ha-881000" hosting pod "kube-controller-manager-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.800133    2850 pod_ready.go:81] duration metric: took 393.821667ms for pod "kube-controller-manager-ha-881000" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:17.800141    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "kube-controller-manager-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.800145    2850 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nqzkk" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:18.000719    2850 request.go:629] Waited for 200.550291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqzkk
	I0708 12:44:18.000760    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqzkk
	I0708 12:44:18.000764    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.000767    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.000771    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.001795    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:18.200733    2850 request.go:629] Waited for 198.662625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:18.200760    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:18.200763    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.200768    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.200771    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.201703    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:18.201906    2850 pod_ready.go:97] node "ha-881000" hosting pod "kube-proxy-nqzkk" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:18.201917    2850 pod_ready.go:81] duration metric: took 401.777959ms for pod "kube-proxy-nqzkk" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:18.201922    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "kube-proxy-nqzkk" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:18.201926    2850 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:18.400713    2850 request.go:629] Waited for 198.750875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-881000
	I0708 12:44:18.400740    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-881000
	I0708 12:44:18.400743    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.400754    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.400770    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.401682    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:18.600687    2850 request.go:629] Waited for 198.768834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:18.600709    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:18.600713    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.600717    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.600720    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.601755    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:18.601960    2850 pod_ready.go:97] node "ha-881000" hosting pod "kube-scheduler-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:18.601967    2850 pod_ready.go:81] duration metric: took 400.041458ms for pod "kube-scheduler-ha-881000" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:18.601972    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "kube-scheduler-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:18.601976    2850 pod_ready.go:38] duration metric: took 1.353880375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 12:44:18.601986    2850 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 12:44:18.606641    2850 ops.go:34] apiserver oom_adj: -16
	I0708 12:44:18.606648    2850 kubeadm.go:591] duration metric: took 6.288141125s to restartPrimaryControlPlane
	I0708 12:44:18.606652    2850 kubeadm.go:393] duration metric: took 6.296948166s to StartCluster
	I0708 12:44:18.606660    2850 settings.go:142] acquiring lock: {Name:mka0c397a57d617e1d77508d22cc3adb2edf5927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:44:18.606747    2850 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:44:18.607091    2850 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:44:18.607314    2850 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:44:18.607389    2850 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:44:18.607375    2850 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 12:44:18.607406    2850 addons.go:69] Setting storage-provisioner=true in profile "ha-881000"
	I0708 12:44:18.607418    2850 addons.go:234] Setting addon storage-provisioner=true in "ha-881000"
	W0708 12:44:18.607421    2850 addons.go:243] addon storage-provisioner should already be in state true
	I0708 12:44:18.607424    2850 addons.go:69] Setting default-storageclass=true in profile "ha-881000"
	I0708 12:44:18.607432    2850 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:44:18.607437    2850 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-881000"
	I0708 12:44:18.608205    2850 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:44:18.608335    2850 kapi.go:59] client config for ha-881000: &rest.Config{Host:"https://192.168.105.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066fb4f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
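
The kapi.go dump above is the sanitized rest.Config that minikube builds from the test kubeconfig (client certificate and key under the ha-881000 profile, CA from .minikube/ca.crt). Purely as an illustrative sketch, not minikube's own code, the following Go snippet builds an equivalent client from that same kubeconfig path with client-go and asks the apiserver for its version; the path is copied verbatim from the "Config loaded from file" line.

    // Illustrative only: build a client from the kubeconfig referenced in the
    // log above and confirm the apiserver answers. Not minikube's own code.
    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path taken verbatim from the "Config loaded from file" line.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19195-1270/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // A cheap round trip that exercises the TLS client config shown above.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("apiserver version:", v.GitVersion)
    }
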
	I0708 12:44:18.608457    2850 addons.go:234] Setting addon default-storageclass=true in "ha-881000"
	W0708 12:44:18.608462    2850 addons.go:243] addon default-storageclass should already be in state true
	I0708 12:44:18.608469    2850 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:44:18.610726    2850 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 12:44:18.610731    2850 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 12:44:18.610737    2850 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:44:18.614244    2850 out.go:177] * Verifying Kubernetes components...
	I0708 12:44:18.617317    2850 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 12:44:18.620243    2850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:44:18.624383    2850 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 12:44:18.624391    2850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 12:44:18.624397    2850 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:44:18.726309    2850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 12:44:18.733759    2850 node_ready.go:35] waiting up to 6m0s for node "ha-881000" to be "Ready" ...
	I0708 12:44:18.735513    2850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 12:44:18.745055    2850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
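
The two ssh_runner commands above apply the addon manifests inside the VM with the bundled kubectl binary and the in-VM kubeconfig. As a rough sketch only (not minikube's ssh_runner), the same apply could be driven from Go with os/exec; the manifest path and KUBECONFIG value are copied from the logged command, and kubectl is assumed to be on PATH of the machine running the snippet.

    // Illustrative only: mirror the logged "kubectl apply" for the
    // storage-provisioner addon via os/exec. Assumes kubectl is on PATH;
    // manifest path and KUBECONFIG value are copied from the log line above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            os.Exit(1)
        }
    }
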
	I0708 12:44:18.799161    2850 request.go:629] Waited for 65.348458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:18.799201    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:18.799204    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.799208    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.799210    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.799947    2850 round_trippers.go:463] GET https://192.168.105.5:8443/apis/storage.k8s.io/v1/storageclasses
	I0708 12:44:18.799951    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.799955    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.799957    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.800673    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:18.801351    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:18.801570    2850 round_trippers.go:463] PUT https://192.168.105.5:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0708 12:44:18.801577    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.801580    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.801582    2850 round_trippers.go:473]     Content-Type: application/json
	I0708 12:44:18.801585    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.802844    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:19.059311    2850 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0708 12:44:19.067209    2850 addons.go:510] duration metric: took 459.852666ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0708 12:44:19.235844    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:19.235851    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:19.235856    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:19.235859    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:19.236902    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:19.735892    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:19.735912    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:19.735916    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:19.735919    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:19.737534    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:20.235862    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:20.235876    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:20.235881    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:20.235883    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:20.237086    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:20.735905    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:20.735922    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:20.735927    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:20.735930    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:20.737429    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:20.737780    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:21.235787    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:21.235797    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:21.235801    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:21.235804    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:21.238836    2850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 12:44:21.735864    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:21.735882    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:21.735887    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:21.735890    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:21.737662    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:22.235745    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:22.235752    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:22.235756    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:22.235758    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:22.237129    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:22.735795    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:22.735809    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:22.735814    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:22.735816    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:22.737382    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:23.235749    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:23.235765    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:23.235775    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:23.235778    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:23.236820    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:23.237151    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:23.735786    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:23.735801    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:23.735806    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:23.735809    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:23.737333    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:24.235786    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:24.235803    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:24.235822    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:24.235825    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:24.237083    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:24.735772    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:24.735789    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:24.735794    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:24.735820    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:24.737353    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:25.235676    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:25.235686    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:25.235689    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:25.235691    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:25.236773    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:25.735738    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:25.735757    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:25.735763    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:25.735765    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:25.737437    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:25.737856    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:26.234556    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:26.234577    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:26.234587    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:26.234589    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:26.235839    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:26.735724    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:26.735739    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:26.735743    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:26.735746    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:26.736780    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:27.235663    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:27.235677    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:27.235684    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:27.235687    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:27.236728    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:27.735660    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:27.735676    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:27.735681    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:27.735683    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:27.737284    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:28.235615    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:28.235623    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:28.235627    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:28.235629    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:28.236823    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:28.237230    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:28.735637    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:28.735647    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:28.735651    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:28.735654    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:28.736768    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:29.235642    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:29.235655    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:29.235660    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:29.235662    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:29.236759    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:29.735679    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:29.735694    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:29.735699    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:29.735702    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:29.737248    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:30.235715    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:30.235732    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:30.235736    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:30.235738    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:30.237139    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:30.237360    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:30.735612    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:30.735621    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:30.735625    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:30.735628    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:30.737092    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:31.235569    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:31.235579    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:31.235582    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:31.235584    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:31.236633    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:31.735612    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:31.735629    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:31.735634    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:31.735636    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:31.737295    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:32.235587    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:32.235601    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:32.235606    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:32.235608    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:32.236778    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:32.735526    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:32.735536    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:32.735540    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:32.735543    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:32.736711    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:32.736963    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:33.235522    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:33.235531    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:33.235535    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:33.235537    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:33.236642    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:33.735561    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:33.735579    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:33.735583    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:33.735587    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:33.737423    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:34.234768    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:34.234774    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:34.234778    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:34.234782    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:34.235720    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:34.735499    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:34.735513    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:34.735517    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:34.735519    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:34.737106    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:34.737386    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:35.235461    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:35.235468    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:35.235471    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:35.235473    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:35.236519    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:35.735547    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:35.735566    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:35.735571    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:35.735573    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:35.737177    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:36.235487    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:36.235497    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:36.235501    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:36.235504    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:36.236545    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:36.735449    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:36.735460    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:36.735463    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:36.735465    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:36.736519    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:37.235434    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:37.235448    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:37.235453    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:37.235455    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:37.236808    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:37.237168    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:37.735447    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:37.735461    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:37.735466    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:37.735468    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:37.737064    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:38.235384    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:38.235396    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:38.235400    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:38.235402    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:38.236438    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:38.735456    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:38.735486    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:38.735492    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:38.735494    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:38.737102    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:39.235386    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:39.235400    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:39.235405    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:39.235406    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:39.236435    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:39.735371    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:39.735383    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:39.735388    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:39.735389    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:39.736892    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:39.737147    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:40.235353    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:40.235365    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:40.235369    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:40.235371    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:40.236374    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:40.735358    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:40.735368    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:40.735373    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:40.735375    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:40.736868    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:41.235350    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:41.235362    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:41.235366    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:41.235368    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:41.236415    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:41.734992    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:41.735008    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:41.735015    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:41.735017    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:41.736518    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:42.235065    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:42.235078    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:42.235083    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:42.235090    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:42.236101    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:42.236392    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:42.735337    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:42.735358    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:42.735363    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:42.735365    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:42.737010    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:43.235275    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:43.235291    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:43.235296    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:43.235298    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:43.236740    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:43.735299    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:43.735318    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:43.735335    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:43.735344    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:43.736716    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:44.234553    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:44.234566    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:44.234571    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:44.234573    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:44.235683    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:44.735264    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:44.735279    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:44.735289    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:44.735295    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:44.737018    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:44.737323    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:45.235322    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:45.235339    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:45.235343    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:45.235346    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:45.236568    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:45.735272    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:45.735286    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:45.735291    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:45.735292    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:45.736686    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:46.235249    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:46.235265    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:46.235269    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:46.235274    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:46.236232    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:46.734093    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:46.734107    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:46.734111    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:46.734113    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:46.735581    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:47.235190    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:47.235202    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:47.235206    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:47.235209    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:47.236406    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:47.236650    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:47.735215    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:47.735228    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:47.735232    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:47.735234    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:47.736259    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:48.233546    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:48.233578    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:48.233583    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:48.233585    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:48.234802    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:48.735158    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:48.735172    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:48.735177    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:48.735182    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:48.736872    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:49.233644    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:49.233670    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:49.233674    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:49.233677    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:49.234965    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:49.735126    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:49.735140    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:49.735145    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:49.735147    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:49.736687    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:49.736960    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:50.235134    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:50.235150    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:50.235154    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:50.235156    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:50.236547    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:50.735176    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:50.735195    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:50.735199    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:50.735202    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:50.736808    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:51.235103    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:51.235114    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:51.235118    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:51.235120    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:51.236309    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:51.735098    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:51.735112    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:51.735116    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:51.735119    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:51.736598    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:52.235093    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:52.235104    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:52.235109    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:52.235111    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:52.236547    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:52.236770    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:52.735055    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:52.735066    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:52.735071    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:52.735073    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:52.736570    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:53.235045    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:53.235062    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:53.235066    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:53.235069    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:53.236349    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:53.735079    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:53.735097    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:53.735102    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:53.735105    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:53.736701    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:54.235019    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:54.235031    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:54.235036    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:54.235037    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:54.235970    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:54.735046    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:54.735062    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:54.735066    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:54.735068    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:54.736566    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:54.736815    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:55.235012    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:55.235022    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:55.235025    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:55.235027    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:55.236372    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:55.735033    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:55.735049    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:55.735056    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:55.735059    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:55.736673    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:56.234979    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:56.234992    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:56.234995    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:56.234998    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:56.235922    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:56.236329    2850 node_ready.go:49] node "ha-881000" has status "Ready":"True"
	I0708 12:44:56.236344    2850 node_ready.go:38] duration metric: took 37.503461958s for node "ha-881000" to be "Ready" ...
	I0708 12:44:56.236348    2850 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
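
The long run of GET /api/v1/nodes/ha-881000 requests above is node_ready.go polling roughly every 500ms until the node's Ready condition turns True, which here took about 37.5s. Below is a minimal client-go sketch of that style of wait; it reuses the kubeconfig path and node name from the log and is not the actual minikube implementation.

    // Illustrative only: poll the node roughly every 500ms, the way the
    // requests above do, until its Ready condition is True or 6 minutes pass.
    // Kubeconfig path and node name are taken from the log; this is not
    // minikube's node_ready.go.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19195-1270/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-881000", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node ha-881000 has status Ready: True")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for node ha-881000 to become Ready")
    }
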
	I0708 12:44:56.236370    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:44:56.236374    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:56.236377    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:56.236381    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:56.237564    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:56.239472    2850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:56.239501    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:56.239505    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:56.239509    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:56.239511    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:56.240195    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:56.240468    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:56.240474    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:56.240477    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:56.240479    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:56.241124    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:56.741438    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:56.741470    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:56.741477    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:56.741479    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:56.742848    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:56.743195    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:56.743203    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:56.743206    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:56.743208    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:56.743986    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:57.241576    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:57.241586    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:57.241590    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:57.241591    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:57.242904    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:57.243256    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:57.243260    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:57.243263    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:57.243266    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:57.244066    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:57.740852    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:57.740873    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:57.740879    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:57.740882    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:57.742355    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:57.742704    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:57.742711    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:57.742713    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:57.742715    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:57.743435    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:58.241528    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:58.241540    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:58.241543    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:58.241546    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:58.242831    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:58.243203    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:58.243210    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:58.243213    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:58.243216    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:58.244052    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:58.244331    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:44:58.741564    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:58.741581    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:58.741585    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:58.741587    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:58.743058    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:58.743429    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:58.743436    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:58.743439    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:58.743448    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:58.744232    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:59.241527    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:59.241554    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:59.241558    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:59.241561    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:59.243100    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:59.243470    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:59.243475    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:59.243479    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:59.243480    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:59.244243    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:59.741559    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:59.741574    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:59.741581    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:59.741590    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:59.743220    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:59.743604    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:59.743609    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:59.743612    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:59.743616    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:59.744456    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:00.241503    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:00.241514    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:00.241519    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:00.241521    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:00.242908    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:00.243345    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:00.243349    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:00.243353    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:00.243355    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:00.244097    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:00.244429    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:00.741467    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:00.741474    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:00.741478    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:00.741481    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:00.742721    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:00.743036    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:00.743040    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:00.743043    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:00.743045    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:00.743830    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:01.241466    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:01.241480    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:01.241485    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:01.241487    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:01.242976    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:01.243375    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:01.243378    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:01.243381    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:01.243383    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:01.244246    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:01.741500    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:01.741510    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:01.741515    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:01.741517    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:01.742994    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:01.743311    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:01.743315    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:01.743317    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:01.743320    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:01.744315    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:02.241478    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:02.241493    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:02.241502    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:02.241504    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:02.243027    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:02.243340    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:02.243346    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:02.243349    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:02.243351    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:02.244132    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:02.244434    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:02.741253    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:02.741263    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:02.741267    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:02.741268    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:02.742891    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:02.743290    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:02.743299    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:02.743303    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:02.743305    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:02.744218    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:03.241431    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:03.241448    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:03.241451    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:03.241457    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:03.243109    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:03.243526    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:03.243530    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:03.243534    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:03.243539    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:03.244375    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:03.741453    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:03.741471    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:03.741475    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:03.741477    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:03.743217    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:03.743611    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:03.743617    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:03.743619    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:03.743622    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:03.744468    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:04.241377    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:04.241388    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:04.241398    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:04.241401    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:04.242470    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:04.242979    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:04.242986    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:04.242990    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:04.242991    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:04.243836    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:04.741446    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:04.741465    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:04.741471    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:04.741474    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:04.743225    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:04.743625    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:04.743630    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:04.743634    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:04.743636    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:04.744628    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:04.744826    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:05.241444    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:05.241474    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:05.241480    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:05.241482    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:05.243001    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:05.243367    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:05.243376    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:05.243380    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:05.243382    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:05.244265    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:05.741419    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:05.741440    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:05.741450    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:05.741453    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:05.742990    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:05.743242    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:05.743245    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:05.743248    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:05.743250    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:05.743991    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:06.241340    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:06.241346    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:06.241353    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:06.241355    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:06.242445    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:06.242855    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:06.242859    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:06.242863    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:06.242868    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:06.243590    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:06.741354    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:06.741367    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:06.741371    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:06.741373    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:06.742464    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:06.742740    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:06.742744    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:06.742747    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:06.742749    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:06.743410    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:07.241315    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:07.241326    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:07.241338    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:07.241341    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:07.242749    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:07.243179    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:07.243183    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:07.243187    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:07.243189    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:07.243964    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:07.244151    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:07.739557    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:07.739583    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:07.739588    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:07.739590    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:07.740828    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:07.741185    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:07.741191    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:07.741194    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:07.741196    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:07.741930    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:08.241296    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:08.241313    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:08.241320    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:08.241323    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:08.242645    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:08.243001    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:08.243005    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:08.243007    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:08.243009    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:08.243876    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:08.741300    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:08.741314    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:08.741318    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:08.741320    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:08.742719    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:08.743058    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:08.743065    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:08.743069    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:08.743072    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:08.743872    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:09.240641    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:09.240654    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:09.240659    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:09.240661    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:09.242233    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:09.242515    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:09.242522    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:09.242525    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:09.242528    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:09.243385    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:09.741272    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:09.741288    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:09.741292    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:09.741295    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:09.742893    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:09.743229    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:09.743234    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:09.743238    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:09.743241    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:09.744099    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:09.744363    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:10.241267    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:10.241278    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:10.241282    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:10.241284    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:10.242588    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:10.242866    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:10.242870    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:10.242873    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:10.242874    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:10.243647    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:10.741298    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:10.741315    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:10.741320    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:10.741322    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:10.742987    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:10.743392    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:10.743400    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:10.743404    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:10.743406    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:10.744188    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:11.241183    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:11.241194    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:11.241200    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:11.241206    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:11.242415    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:11.242712    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:11.242716    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:11.242720    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:11.242721    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:11.243473    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:11.741194    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:11.741205    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:11.741210    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:11.741215    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:11.742465    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:11.742775    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:11.742786    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:11.742788    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:11.742790    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:11.743587    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:12.241201    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:12.241213    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:12.241217    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:12.241219    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:12.242626    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:12.243039    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:12.243043    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:12.243047    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:12.243049    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:12.243917    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:12.244324    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:12.741218    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:12.741232    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:12.741236    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:12.741238    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:12.742818    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:12.743152    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:12.743159    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:12.743162    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:12.743165    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:12.744041    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:13.241184    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:13.241193    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:13.241197    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:13.241200    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:13.242516    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:13.242925    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:13.242931    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:13.242934    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:13.242937    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:13.243758    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:13.741196    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:13.741225    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:13.741230    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:13.741232    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:13.742856    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:13.743178    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:13.743186    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:13.743189    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:13.743192    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:13.743979    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:14.241154    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:14.241167    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:14.241171    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:14.241173    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:14.242419    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:14.242781    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:14.242785    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:14.242788    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:14.242790    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:14.243637    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:14.741167    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:14.741183    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:14.741187    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:14.741189    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:14.742829    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:14.743216    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:14.743220    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:14.743223    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:14.743225    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:14.744156    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:14.744560    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:15.241109    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:15.241121    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:15.241125    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:15.241127    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:15.242193    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:15.242557    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:15.242562    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:15.242564    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:15.242566    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:15.243353    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:15.740357    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:15.740371    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:15.740375    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:15.740377    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:15.741721    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:15.742101    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:15.742108    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:15.742111    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:15.742113    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:15.742969    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:16.241114    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:16.241124    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:16.241129    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:16.241132    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:16.242413    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:16.242874    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:16.242878    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:16.242881    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:16.242883    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:16.243725    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:16.740853    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:16.740868    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:16.740871    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:16.740873    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:16.742018    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:16.742317    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:16.742323    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:16.742327    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:16.742329    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:16.743247    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:17.241095    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:17.241105    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:17.241108    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:17.241110    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:17.242608    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:17.243027    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:17.243033    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:17.243037    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:17.243039    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:17.243877    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:17.244151    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:17.741110    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:17.741131    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:17.741139    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:17.741141    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:17.742557    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:17.742976    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:17.742980    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:17.742983    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:17.742986    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:17.743831    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:18.241102    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:18.241115    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:18.241120    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:18.241122    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:18.242443    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:18.242853    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:18.242860    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:18.242863    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:18.242865    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:18.243717    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:18.741041    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:18.741056    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:18.741061    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:18.741063    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:18.742516    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:18.742874    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:18.742878    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:18.742881    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:18.742883    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:18.743639    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:19.241031    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:19.241056    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:19.241066    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:19.241068    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:19.242350    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:19.242645    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:19.242654    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:19.242656    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:19.242658    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:19.243475    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:19.740005    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:19.740022    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:19.740027    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:19.740031    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:19.741418    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:19.741782    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:19.741786    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:19.741790    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:19.741792    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:19.742669    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:19.742907    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:20.241012    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:20.241026    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.241038    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.241041    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.242251    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:20.242636    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:20.242642    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.242645    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.242648    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.243540    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.741029    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:20.741049    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.741075    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.741079    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.742509    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:20.742986    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:20.742991    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.742994    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.742996    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.743926    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.744128    2850 pod_ready.go:92] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:20.744136    2850 pod_ready.go:81] duration metric: took 24.50524075s for pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.744143    2850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.744168    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rlj9v
	I0708 12:45:20.744171    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.744175    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.744178    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.744921    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.745179    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:20.745186    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.745189    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.745191    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.746038    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.746264    2850 pod_ready.go:92] pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:20.746269    2850 pod_ready.go:81] duration metric: took 2.122458ms for pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.746273    2850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.746294    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-881000
	I0708 12:45:20.746297    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.746302    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.746305    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.747068    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.747506    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:20.747511    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.747513    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.747516    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.748146    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.748358    2850 pod_ready.go:92] pod "etcd-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:20.748364    2850 pod_ready.go:81] duration metric: took 2.08775ms for pod "etcd-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.748368    2850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.748384    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-881000
	I0708 12:45:20.748387    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.748399    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.748402    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.749140    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.749502    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:20.749509    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.749512    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.749514    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.750156    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.750372    2850 pod_ready.go:92] pod "kube-apiserver-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:20.750377    2850 pod_ready.go:81] duration metric: took 2.005875ms for pod "kube-apiserver-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.750381    2850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.750401    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-881000
	I0708 12:45:20.750405    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.750408    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.750411    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.751149    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.751437    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:20.751443    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.751445    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.751448    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.752108    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.752420    2850 pod_ready.go:92] pod "kube-controller-manager-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:20.752423    2850 pod_ready.go:81] duration metric: took 2.038708ms for pod "kube-controller-manager-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.752427    2850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nqzkk" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.943042    2850 request.go:629] Waited for 190.595625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqzkk
	I0708 12:45:20.943064    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqzkk
	I0708 12:45:20.943068    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.943071    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.943073    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.944212    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:21.143013    2850 request.go:629] Waited for 198.567333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:21.143041    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:21.143044    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:21.143048    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:21.143052    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:21.144354    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:21.144551    2850 pod_ready.go:92] pod "kube-proxy-nqzkk" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:21.144558    2850 pod_ready.go:81] duration metric: took 392.136375ms for pod "kube-proxy-nqzkk" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:21.144563    2850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:21.342994    2850 request.go:629] Waited for 198.415125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-881000
	I0708 12:45:21.343019    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-881000
	I0708 12:45:21.343023    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:21.343026    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:21.343029    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:21.344100    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:21.543018    2850 request.go:629] Waited for 198.71175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:21.543057    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:21.543060    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:21.543064    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:21.543066    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:21.544345    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:21.544618    2850 pod_ready.go:92] pod "kube-scheduler-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:21.544628    2850 pod_ready.go:81] duration metric: took 400.0705ms for pod "kube-scheduler-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:21.544636    2850 pod_ready.go:38] duration metric: took 25.308887292s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
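
(Aside on the readiness wait shown above: roughly every 500 ms the process issues a GET for the coredns pod and then for its node until the pod's Ready condition turns True, here after about 24.5 s. Below is a minimal, illustrative client-go sketch of that polling pattern; it is not minikube's implementation, and the kubeconfig path, namespace, pod name, and 6-minute timeout are assumptions taken from the surrounding log.)

    // Sketch only: poll a pod until its PodReady condition is True,
    // mirroring the ~500ms GET loop in the log above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path for illustration.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // matches the "waiting up to 6m0s" budget
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "coredns-7db6d8ff4d-2646x", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // ~500ms cadence, as seen in the log
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
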
	I0708 12:45:21.544648    2850 api_server.go:52] waiting for apiserver process to appear ...
	I0708 12:45:21.544775    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 12:45:21.552615    2850 logs.go:276] 2 containers: [5c7a6d2a7b0f db173c1aa7e6]
	I0708 12:45:21.552684    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 12:45:21.558679    2850 logs.go:276] 2 containers: [8949c5b568b1 5c4705f221f3]
	I0708 12:45:21.558737    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 12:45:21.564299    2850 logs.go:276] 4 containers: [6c32e54a9067 a01fbba041f3 57f745d9e2f1 e5decdf53e42]
	I0708 12:45:21.564354    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 12:45:21.569602    2850 logs.go:276] 2 containers: [6302ef35341b ed9f0e91126a]
	I0708 12:45:21.569653    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 12:45:21.575032    2850 logs.go:276] 2 containers: [6f04b4be84c2 e3b0434a308b]
	I0708 12:45:21.575087    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 12:45:21.580589    2850 logs.go:276] 2 containers: [493877591d89 cc323cbcdc6d]
	I0708 12:45:21.580646    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 12:45:21.586131    2850 logs.go:276] 2 containers: [f18946e45a94 8c20b27d4019]
	I0708 12:45:21.586187    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 12:45:21.591513    2850 logs.go:276] 2 containers: [f496d2b5c569 b545f59f90f8]
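
(Aside on the log gathering that follows: the container IDs above are collected with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" and each container's logs are then tailed with "docker logs --tail 400 <id>". The sketch below reproduces that sequence locally with os/exec for illustration only; in the test these commands run inside the VM via ssh_runner, and the k8s_kube-apiserver filter is just one of the components queried above.)

    // Sketch only: list containers matching a k8s_ name filter and tail their logs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_kube-apiserver", "--format={{.ID}}").Output()
        if err != nil {
            panic(err)
        }
        for _, id := range strings.Fields(string(out)) {
            logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                fmt.Printf("failed to get logs for %s: %v\n", id, err)
                continue
            }
            fmt.Printf("=== %s ===\n%s\n", id, logs)
        }
    }
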
	I0708 12:45:21.591526    2850 logs.go:123] Gathering logs for kube-controller-manager [cc323cbcdc6d] ...
	I0708 12:45:21.591533    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc323cbcdc6d"
	I0708 12:45:21.607202    2850 logs.go:123] Gathering logs for container status ...
	I0708 12:45:21.607214    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 12:45:21.624879    2850 logs.go:123] Gathering logs for kube-apiserver [5c7a6d2a7b0f] ...
	I0708 12:45:21.624889    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c7a6d2a7b0f"
	I0708 12:45:21.635663    2850 logs.go:123] Gathering logs for coredns [6c32e54a9067] ...
	I0708 12:45:21.635673    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c32e54a9067"
	I0708 12:45:21.642127    2850 logs.go:123] Gathering logs for coredns [57f745d9e2f1] ...
	I0708 12:45:21.642136    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57f745d9e2f1"
	I0708 12:45:21.648280    2850 logs.go:123] Gathering logs for coredns [e5decdf53e42] ...
	I0708 12:45:21.648288    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5decdf53e42"
	I0708 12:45:21.655392    2850 logs.go:123] Gathering logs for kube-scheduler [6302ef35341b] ...
	I0708 12:45:21.655399    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6302ef35341b"
	I0708 12:45:21.662654    2850 logs.go:123] Gathering logs for kube-proxy [6f04b4be84c2] ...
	I0708 12:45:21.662665    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f04b4be84c2"
	I0708 12:45:21.675745    2850 logs.go:123] Gathering logs for describe nodes ...
	I0708 12:45:21.675752    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 12:45:21.723651    2850 logs.go:123] Gathering logs for kube-proxy [e3b0434a308b] ...
	I0708 12:45:21.723664    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3b0434a308b"
	I0708 12:45:21.731559    2850 logs.go:123] Gathering logs for kindnet [8c20b27d4019] ...
	I0708 12:45:21.731568    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c20b27d4019"
	I0708 12:45:21.738168    2850 logs.go:123] Gathering logs for storage-provisioner [b545f59f90f8] ...
	I0708 12:45:21.738179    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b545f59f90f8"
	I0708 12:45:21.744758    2850 logs.go:123] Gathering logs for storage-provisioner [f496d2b5c569] ...
	I0708 12:45:21.744768    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f496d2b5c569"
	I0708 12:45:21.751407    2850 logs.go:123] Gathering logs for Docker ...
	I0708 12:45:21.751417    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 12:45:21.772552    2850 logs.go:123] Gathering logs for kubelet ...
	I0708 12:45:21.772559    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 12:45:21.798834    2850 logs.go:123] Gathering logs for dmesg ...
	I0708 12:45:21.798844    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 12:45:21.803479    2850 logs.go:123] Gathering logs for kube-apiserver [db173c1aa7e6] ...
	I0708 12:45:21.803488    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db173c1aa7e6"
	I0708 12:45:21.825761    2850 logs.go:123] Gathering logs for etcd [5c4705f221f3] ...
	I0708 12:45:21.825770    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4705f221f3"
	I0708 12:45:21.836486    2850 logs.go:123] Gathering logs for kube-scheduler [ed9f0e91126a] ...
	I0708 12:45:21.836493    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9f0e91126a"
	I0708 12:45:21.848974    2850 logs.go:123] Gathering logs for kindnet [f18946e45a94] ...
	I0708 12:45:21.848986    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18946e45a94"
	I0708 12:45:21.856358    2850 logs.go:123] Gathering logs for etcd [8949c5b568b1] ...
	I0708 12:45:21.856367    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8949c5b568b1"
	I0708 12:45:21.865831    2850 logs.go:123] Gathering logs for coredns [a01fbba041f3] ...
	I0708 12:45:21.865838    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a01fbba041f3"
	I0708 12:45:21.872089    2850 logs.go:123] Gathering logs for kube-controller-manager [493877591d89] ...
	I0708 12:45:21.872098    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493877591d89"
	I0708 12:45:24.387307    2850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:45:24.393583    2850 api_server.go:72] duration metric: took 1m5.787828625s to wait for apiserver process to appear ...
	I0708 12:45:24.393594    2850 api_server.go:88] waiting for apiserver healthz status ...
	I0708 12:45:24.393665    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 12:45:24.400221    2850 logs.go:276] 2 containers: [5c7a6d2a7b0f db173c1aa7e6]
	I0708 12:45:24.400296    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 12:45:24.405445    2850 logs.go:276] 2 containers: [8949c5b568b1 5c4705f221f3]
	I0708 12:45:24.405503    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 12:45:24.410777    2850 logs.go:276] 4 containers: [6c32e54a9067 a01fbba041f3 57f745d9e2f1 e5decdf53e42]
	I0708 12:45:24.410837    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 12:45:24.415915    2850 logs.go:276] 2 containers: [6302ef35341b ed9f0e91126a]
	I0708 12:45:24.415972    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 12:45:24.421293    2850 logs.go:276] 2 containers: [6f04b4be84c2 e3b0434a308b]
	I0708 12:45:24.421346    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 12:45:24.426972    2850 logs.go:276] 2 containers: [493877591d89 cc323cbcdc6d]
	I0708 12:45:24.427024    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 12:45:24.432727    2850 logs.go:276] 2 containers: [f18946e45a94 8c20b27d4019]
	I0708 12:45:24.432774    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 12:45:24.437926    2850 logs.go:276] 2 containers: [f496d2b5c569 b545f59f90f8]
	I0708 12:45:24.437937    2850 logs.go:123] Gathering logs for storage-provisioner [f496d2b5c569] ...
	I0708 12:45:24.437942    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f496d2b5c569"
	I0708 12:45:24.447768    2850 logs.go:123] Gathering logs for describe nodes ...
	I0708 12:45:24.447783    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 12:45:24.489600    2850 logs.go:123] Gathering logs for kube-apiserver [db173c1aa7e6] ...
	I0708 12:45:24.489610    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db173c1aa7e6"
	I0708 12:45:24.511141    2850 logs.go:123] Gathering logs for etcd [5c4705f221f3] ...
	I0708 12:45:24.511152    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4705f221f3"
	I0708 12:45:24.521818    2850 logs.go:123] Gathering logs for coredns [6c32e54a9067] ...
	I0708 12:45:24.521827    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c32e54a9067"
	I0708 12:45:24.528443    2850 logs.go:123] Gathering logs for coredns [e5decdf53e42] ...
	I0708 12:45:24.528453    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5decdf53e42"
	I0708 12:45:24.534731    2850 logs.go:123] Gathering logs for kube-proxy [e3b0434a308b] ...
	I0708 12:45:24.534740    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3b0434a308b"
	I0708 12:45:24.548886    2850 logs.go:123] Gathering logs for Docker ...
	I0708 12:45:24.548895    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 12:45:24.570100    2850 logs.go:123] Gathering logs for kube-apiserver [5c7a6d2a7b0f] ...
	I0708 12:45:24.570107    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c7a6d2a7b0f"
	I0708 12:45:24.581621    2850 logs.go:123] Gathering logs for coredns [a01fbba041f3] ...
	I0708 12:45:24.581630    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a01fbba041f3"
	I0708 12:45:24.588641    2850 logs.go:123] Gathering logs for kube-proxy [6f04b4be84c2] ...
	I0708 12:45:24.588650    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f04b4be84c2"
	I0708 12:45:24.599223    2850 logs.go:123] Gathering logs for kube-controller-manager [cc323cbcdc6d] ...
	I0708 12:45:24.599232    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc323cbcdc6d"
	I0708 12:45:24.613392    2850 logs.go:123] Gathering logs for kindnet [f18946e45a94] ...
	I0708 12:45:24.613401    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18946e45a94"
	I0708 12:45:24.619751    2850 logs.go:123] Gathering logs for storage-provisioner [b545f59f90f8] ...
	I0708 12:45:24.619759    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b545f59f90f8"
	I0708 12:45:24.630591    2850 logs.go:123] Gathering logs for container status ...
	I0708 12:45:24.630599    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 12:45:24.647058    2850 logs.go:123] Gathering logs for kubelet ...
	I0708 12:45:24.647069    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 12:45:24.672902    2850 logs.go:123] Gathering logs for dmesg ...
	I0708 12:45:24.672910    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 12:45:24.677837    2850 logs.go:123] Gathering logs for etcd [8949c5b568b1] ...
	I0708 12:45:24.677844    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8949c5b568b1"
	I0708 12:45:24.687305    2850 logs.go:123] Gathering logs for coredns [57f745d9e2f1] ...
	I0708 12:45:24.687312    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57f745d9e2f1"
	I0708 12:45:24.697783    2850 logs.go:123] Gathering logs for kube-scheduler [6302ef35341b] ...
	I0708 12:45:24.697792    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6302ef35341b"
	I0708 12:45:24.704557    2850 logs.go:123] Gathering logs for kindnet [8c20b27d4019] ...
	I0708 12:45:24.704564    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c20b27d4019"
	I0708 12:45:24.711484    2850 logs.go:123] Gathering logs for kube-scheduler [ed9f0e91126a] ...
	I0708 12:45:24.711493    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9f0e91126a"
	I0708 12:45:24.720680    2850 logs.go:123] Gathering logs for kube-controller-manager [493877591d89] ...
	I0708 12:45:24.720687    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493877591d89"
	I0708 12:45:27.236929    2850 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:45:27.240051    2850 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0708 12:45:27.240085    2850 round_trippers.go:463] GET https://192.168.105.5:8443/version
	I0708 12:45:27.240088    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:27.240093    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:27.240096    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:27.240589    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:27.240629    2850 api_server.go:141] control plane version: v1.30.2
	I0708 12:45:27.240636    2850 api_server.go:131] duration metric: took 2.847107292s to wait for apiserver health ...
	I0708 12:45:27.240641    2850 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 12:45:27.240716    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 12:45:27.254338    2850 logs.go:276] 2 containers: [5c7a6d2a7b0f db173c1aa7e6]
	I0708 12:45:27.254406    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 12:45:27.260147    2850 logs.go:276] 2 containers: [8949c5b568b1 5c4705f221f3]
	I0708 12:45:27.260213    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 12:45:27.266086    2850 logs.go:276] 4 containers: [6c32e54a9067 a01fbba041f3 57f745d9e2f1 e5decdf53e42]
	I0708 12:45:27.266140    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 12:45:27.274117    2850 logs.go:276] 2 containers: [6302ef35341b ed9f0e91126a]
	I0708 12:45:27.274169    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 12:45:27.280079    2850 logs.go:276] 2 containers: [6f04b4be84c2 e3b0434a308b]
	I0708 12:45:27.280136    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 12:45:27.286177    2850 logs.go:276] 2 containers: [493877591d89 cc323cbcdc6d]
	I0708 12:45:27.286231    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 12:45:27.291733    2850 logs.go:276] 2 containers: [f18946e45a94 8c20b27d4019]
	I0708 12:45:27.291788    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 12:45:27.297265    2850 logs.go:276] 2 containers: [f496d2b5c569 b545f59f90f8]
	I0708 12:45:27.297282    2850 logs.go:123] Gathering logs for kindnet [f18946e45a94] ...
	I0708 12:45:27.297287    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18946e45a94"
	I0708 12:45:27.304167    2850 logs.go:123] Gathering logs for kindnet [8c20b27d4019] ...
	I0708 12:45:27.304176    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c20b27d4019"
	I0708 12:45:27.310889    2850 logs.go:123] Gathering logs for kube-proxy [6f04b4be84c2] ...
	I0708 12:45:27.310897    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f04b4be84c2"
	I0708 12:45:27.317712    2850 logs.go:123] Gathering logs for storage-provisioner [f496d2b5c569] ...
	I0708 12:45:27.317720    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f496d2b5c569"
	I0708 12:45:27.323963    2850 logs.go:123] Gathering logs for storage-provisioner [b545f59f90f8] ...
	I0708 12:45:27.323970    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b545f59f90f8"
	I0708 12:45:27.330253    2850 logs.go:123] Gathering logs for describe nodes ...
	I0708 12:45:27.330266    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 12:45:27.372208    2850 logs.go:123] Gathering logs for etcd [5c4705f221f3] ...
	I0708 12:45:27.372219    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4705f221f3"
	I0708 12:45:27.382601    2850 logs.go:123] Gathering logs for coredns [6c32e54a9067] ...
	I0708 12:45:27.382609    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c32e54a9067"
	I0708 12:45:27.389288    2850 logs.go:123] Gathering logs for coredns [57f745d9e2f1] ...
	I0708 12:45:27.389296    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57f745d9e2f1"
	I0708 12:45:27.396159    2850 logs.go:123] Gathering logs for kube-scheduler [6302ef35341b] ...
	I0708 12:45:27.396166    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6302ef35341b"
	I0708 12:45:27.402921    2850 logs.go:123] Gathering logs for Docker ...
	I0708 12:45:27.402929    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 12:45:27.423728    2850 logs.go:123] Gathering logs for container status ...
	I0708 12:45:27.423736    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 12:45:27.440237    2850 logs.go:123] Gathering logs for kubelet ...
	I0708 12:45:27.440249    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 12:45:27.465291    2850 logs.go:123] Gathering logs for kube-apiserver [5c7a6d2a7b0f] ...
	I0708 12:45:27.465301    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c7a6d2a7b0f"
	I0708 12:45:27.476076    2850 logs.go:123] Gathering logs for coredns [a01fbba041f3] ...
	I0708 12:45:27.476087    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a01fbba041f3"
	I0708 12:45:27.482656    2850 logs.go:123] Gathering logs for kube-scheduler [ed9f0e91126a] ...
	I0708 12:45:27.482665    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9f0e91126a"
	I0708 12:45:27.492131    2850 logs.go:123] Gathering logs for kube-controller-manager [493877591d89] ...
	I0708 12:45:27.492138    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493877591d89"
	I0708 12:45:27.508284    2850 logs.go:123] Gathering logs for kube-controller-manager [cc323cbcdc6d] ...
	I0708 12:45:27.508292    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc323cbcdc6d"
	I0708 12:45:27.522615    2850 logs.go:123] Gathering logs for dmesg ...
	I0708 12:45:27.522624    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 12:45:27.527631    2850 logs.go:123] Gathering logs for kube-apiserver [db173c1aa7e6] ...
	I0708 12:45:27.527640    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db173c1aa7e6"
	I0708 12:45:27.549074    2850 logs.go:123] Gathering logs for etcd [8949c5b568b1] ...
	I0708 12:45:27.549082    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8949c5b568b1"
	I0708 12:45:27.559601    2850 logs.go:123] Gathering logs for coredns [e5decdf53e42] ...
	I0708 12:45:27.559613    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5decdf53e42"
	I0708 12:45:27.567251    2850 logs.go:123] Gathering logs for kube-proxy [e3b0434a308b] ...
	I0708 12:45:27.567261    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3b0434a308b"
	I0708 12:45:30.076023    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:45:30.076041    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:30.076045    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:30.076056    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:30.077861    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:30.079991    2850 system_pods.go:59] 9 kube-system pods found
	I0708 12:45:30.079999    2850 system_pods.go:61] "coredns-7db6d8ff4d-2646x" [5a1aa968-b181-4318-a7f2-fb0f94617bd5] Running
	I0708 12:45:30.080002    2850 system_pods.go:61] "coredns-7db6d8ff4d-rlj9v" [57423cc1-b13f-45c7-b2df-71621270a61f] Running
	I0708 12:45:30.080004    2850 system_pods.go:61] "etcd-ha-881000" [b905dbae-009a-44f3-87e4-756dfae87ce6] Running
	I0708 12:45:30.080005    2850 system_pods.go:61] "kindnet-mmchf" [2f8fecb7-8906-46c9-9d55-c56254b8b3d7] Running
	I0708 12:45:30.080007    2850 system_pods.go:61] "kube-apiserver-ha-881000" [ea5dbd32-5574-42d6-9efd-3956e499027a] Running
	I0708 12:45:30.080018    2850 system_pods.go:61] "kube-controller-manager-ha-881000" [3f0c772a-e298-47e5-a20d-4201060d8e09] Running
	I0708 12:45:30.080021    2850 system_pods.go:61] "kube-proxy-nqzkk" [0037978f-9b19-49c2-a0fd-a7757effb5e9] Running
	I0708 12:45:30.080023    2850 system_pods.go:61] "kube-scheduler-ha-881000" [03ce3397-c2e8-4b90-a33c-11fb0368a30e] Running
	I0708 12:45:30.080025    2850 system_pods.go:61] "storage-provisioner" [62d01d4e-c78c-499e-9905-7ff510f1edea] Running
	I0708 12:45:30.080029    2850 system_pods.go:74] duration metric: took 2.839449625s to wait for pod list to return data ...
	I0708 12:45:30.080034    2850 default_sa.go:34] waiting for default service account to be created ...
	I0708 12:45:30.080073    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0708 12:45:30.080077    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:30.080080    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:30.080083    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:30.080889    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:30.081147    2850 default_sa.go:45] found service account: "default"
	I0708 12:45:30.081154    2850 default_sa.go:55] duration metric: took 1.117166ms for default service account to be created ...
	I0708 12:45:30.081158    2850 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 12:45:30.081179    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:45:30.081182    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:30.081186    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:30.081188    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:30.083151    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:30.084871    2850 system_pods.go:86] 9 kube-system pods found
	I0708 12:45:30.084879    2850 system_pods.go:89] "coredns-7db6d8ff4d-2646x" [5a1aa968-b181-4318-a7f2-fb0f94617bd5] Running
	I0708 12:45:30.084882    2850 system_pods.go:89] "coredns-7db6d8ff4d-rlj9v" [57423cc1-b13f-45c7-b2df-71621270a61f] Running
	I0708 12:45:30.084884    2850 system_pods.go:89] "etcd-ha-881000" [b905dbae-009a-44f3-87e4-756dfae87ce6] Running
	I0708 12:45:30.084886    2850 system_pods.go:89] "kindnet-mmchf" [2f8fecb7-8906-46c9-9d55-c56254b8b3d7] Running
	I0708 12:45:30.084888    2850 system_pods.go:89] "kube-apiserver-ha-881000" [ea5dbd32-5574-42d6-9efd-3956e499027a] Running
	I0708 12:45:30.084890    2850 system_pods.go:89] "kube-controller-manager-ha-881000" [3f0c772a-e298-47e5-a20d-4201060d8e09] Running
	I0708 12:45:30.084903    2850 system_pods.go:89] "kube-proxy-nqzkk" [0037978f-9b19-49c2-a0fd-a7757effb5e9] Running
	I0708 12:45:30.084907    2850 system_pods.go:89] "kube-scheduler-ha-881000" [03ce3397-c2e8-4b90-a33c-11fb0368a30e] Running
	I0708 12:45:30.084909    2850 system_pods.go:89] "storage-provisioner" [62d01d4e-c78c-499e-9905-7ff510f1edea] Running
	I0708 12:45:30.084912    2850 system_pods.go:126] duration metric: took 3.7505ms to wait for k8s-apps to be running ...
	I0708 12:45:30.084917    2850 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 12:45:30.084981    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 12:45:30.090942    2850 system_svc.go:56] duration metric: took 6.022875ms WaitForService to wait for kubelet
	I0708 12:45:30.090950    2850 kubeadm.go:576] duration metric: took 1m11.485335084s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:45:30.090960    2850 node_conditions.go:102] verifying NodePressure condition ...
	I0708 12:45:30.090991    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes
	I0708 12:45:30.090994    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:30.090998    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:30.091001    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:30.092084    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:30.092353    2850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 12:45:30.092360    2850 node_conditions.go:123] node cpu capacity is 2
	I0708 12:45:30.092366    2850 node_conditions.go:105] duration metric: took 1.40375ms to run NodePressure ...
	I0708 12:45:30.092373    2850 start.go:240] waiting for startup goroutines ...
	I0708 12:45:30.092377    2850 start.go:245] waiting for cluster config update ...
	I0708 12:45:30.092383    2850 start.go:254] writing updated cluster config ...
	I0708 12:45:30.092691    2850 ssh_runner.go:195] Run: rm -f paused
	I0708 12:45:30.122217    2850 start.go:600] kubectl: 1.29.2, cluster: 1.30.2 (minor skew: 1)
	I0708 12:45:30.126527    2850 out.go:177] * Done! kubectl is now configured to use "ha-881000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 08 19:44:47 ha-881000 dockerd[896]: time="2024-07-08T19:44:47.292105245Z" level=info msg="shim disconnected" id=b545f59f90f80f0cdf0042b37be15da16017501ae82b914b769f62ea576231fa namespace=moby
	Jul 08 19:44:47 ha-881000 dockerd[896]: time="2024-07-08T19:44:47.292246613Z" level=warning msg="cleaning up after shim disconnected" id=b545f59f90f80f0cdf0042b37be15da16017501ae82b914b769f62ea576231fa namespace=moby
	Jul 08 19:44:47 ha-881000 dockerd[896]: time="2024-07-08T19:44:47.292267079Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 08 19:45:13 ha-881000 dockerd[896]: time="2024-07-08T19:45:13.126056937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:45:13 ha-881000 dockerd[896]: time="2024-07-08T19:45:13.126118427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:45:13 ha-881000 dockerd[896]: time="2024-07-08T19:45:13.126127139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:13 ha-881000 dockerd[896]: time="2024-07-08T19:45:13.126182709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.940610548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.940668194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.940674196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.940701706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.943181601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.943203776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.943208486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.943233495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:20 ha-881000 cri-dockerd[1141]: time="2024-07-08T19:45:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/23677c502488831db518edb8bbdf324cf64b638d6fe121190bb059ceb940138a/resolv.conf as [nameserver 192.168.105.1]"
	Jul 08 19:45:20 ha-881000 cri-dockerd[1141]: time="2024-07-08T19:45:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7011510af1b75082a5739ec139795a85e75ff4c104b475ff6052b64c891ac506/resolv.conf as [nameserver 192.168.105.1]"
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.045893426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.045938775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.045946903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.045977830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.047576210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.047653153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.047679412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.047758523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6c32e54a90678       2437cf7621777                                                                              10 seconds ago       Running             coredns                   1                   7011510af1b75       coredns-7db6d8ff4d-2646x
	a01fbba041f3a       2437cf7621777                                                                              10 seconds ago       Running             coredns                   1                   23677c5024888       coredns-7db6d8ff4d-rlj9v
	f496d2b5c569e       ba04bb24b9575                                                                              17 seconds ago       Running             storage-provisioner       2                   dad919ff93745       storage-provisioner
	f18946e45a948       89d73d416b992                                                                              About a minute ago   Running             kindnet-cni               1                   099ead060d0cd       kindnet-mmchf
	b545f59f90f80       ba04bb24b9575                                                                              About a minute ago   Exited              storage-provisioner       1                   dad919ff93745       storage-provisioner
	6f04b4be84c25       66dbb96a9149f                                                                              About a minute ago   Running             kube-proxy                1                   28a3ff4318c5f       kube-proxy-nqzkk
	6302ef35341bd       c7dd04b1bafeb                                                                              About a minute ago   Running             kube-scheduler            1                   684b59b7d91d5       kube-scheduler-ha-881000
	8949c5b568b19       014faa467e297                                                                              About a minute ago   Running             etcd                      1                   16b5e2057f2c5       etcd-ha-881000
	493877591d899       e1dcc3400d3ea                                                                              About a minute ago   Running             kube-controller-manager   1                   3bd1107ec9cc2       kube-controller-manager-ha-881000
	5c7a6d2a7b0fa       84c601f3f72c8                                                                              About a minute ago   Running             kube-apiserver            1                   c7b8eee4b404a       kube-apiserver-ha-881000
	57f745d9e2f1c       2437cf7621777                                                                              About a minute ago   Exited              coredns                   0                   e337c3f92f0c7       coredns-7db6d8ff4d-rlj9v
	e5decdf53e42b       2437cf7621777                                                                              About a minute ago   Exited              coredns                   0                   1752461159c80       coredns-7db6d8ff4d-2646x
	8c20b27d40191       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8   2 minutes ago        Exited              kindnet-cni               0                   52b9dd42202b7       kindnet-mmchf
	e3b0434a308bd       66dbb96a9149f                                                                              2 minutes ago        Exited              kube-proxy                0                   f031f136a08f5       kube-proxy-nqzkk
	ed9f0e91126a2       c7dd04b1bafeb                                                                              2 minutes ago        Exited              kube-scheduler            0                   e9a1e4f9ec7d4       kube-scheduler-ha-881000
	5c4705f221f30       014faa467e297                                                                              2 minutes ago        Exited              etcd                      0                   59d4e027b0867       etcd-ha-881000
	db173c1aa7e67       84c601f3f72c8                                                                              2 minutes ago        Exited              kube-apiserver            0                   3994029f9ba47       kube-apiserver-ha-881000
	cc323cbcdc6df       e1dcc3400d3ea                                                                              2 minutes ago        Exited              kube-controller-manager   0                   109f63f7b1864       kube-controller-manager-ha-881000
	
	
	==> coredns [57f745d9e2f1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6c32e54a9067] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	
	
	==> coredns [a01fbba041f3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	
	
	==> coredns [e5decdf53e42] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-881000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-881000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-881000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T12_43_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:43:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-881000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 19:45:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:44:56 +0000   Mon, 08 Jul 2024 19:43:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:44:56 +0000   Mon, 08 Jul 2024 19:43:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:44:56 +0000   Mon, 08 Jul 2024 19:43:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:44:56 +0000   Mon, 08 Jul 2024 19:44:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    ha-881000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147456Ki
	  pods:               110
	System Info:
	  Machine ID:                 bcb7b02242954eb38ab118c97ee41a44
	  System UUID:                bcb7b02242954eb38ab118c97ee41a44
	  Boot ID:                    93e628f2-f162-4f4e-a0c0-1d052ecf72d3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2646x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m2s
	  kube-system                 coredns-7db6d8ff4d-rlj9v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m2s
	  kube-system                 etcd-ha-881000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m16s
	  kube-system                 kindnet-mmchf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m2s
	  kube-system                 kube-apiserver-ha-881000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-controller-manager-ha-881000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-proxy-nqzkk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-scheduler-ha-881000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m2s               kube-proxy       
	  Normal  Starting                 73s                kube-proxy       
	  Normal  NodeHasSufficientPID     2m16s              kubelet          Node ha-881000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m16s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m16s              kubelet          Node ha-881000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s              kubelet          Node ha-881000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m16s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m3s               node-controller  Node ha-881000 event: Registered Node ha-881000 in Controller
	  Normal  NodeReady                118s               kubelet          Node ha-881000 status is now: NodeReady
	  Normal  Starting                 77s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)  kubelet          Node ha-881000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)  kubelet          Node ha-881000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)  kubelet          Node ha-881000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                node-controller  Node ha-881000 event: Registered Node ha-881000 in Controller
	
	
	==> dmesg <==
	[Jul 8 19:43] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.651397] EINJ: EINJ table not found.
	[  +0.525130] systemd-fstab-generator[117]: Ignoring "noauto" option for root device
	[  +0.160244] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000399] platform regulatory.0: Falling back to sysfs fallback for: regulatory.db
	[Jul 8 19:44] systemd-fstab-generator[496]: Ignoring "noauto" option for root device
	[  +0.074941] systemd-fstab-generator[508]: Ignoring "noauto" option for root device
	[  +1.521517] systemd-fstab-generator[785]: Ignoring "noauto" option for root device
	[  +0.191000] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.086566] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +0.089024] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +2.317040] systemd-fstab-generator[1094]: Ignoring "noauto" option for root device
	[  +0.081076] systemd-fstab-generator[1106]: Ignoring "noauto" option for root device
	[  +0.072788] systemd-fstab-generator[1118]: Ignoring "noauto" option for root device
	[  +0.092507] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.198555] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	[  +1.047479] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +0.036332] kauditd_printk_skb: 307 callbacks suppressed
	[  +5.497903] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[  +0.053664] kauditd_printk_skb: 122 callbacks suppressed
	[  +9.739594] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [5c4705f221f3] <==
	{"level":"info","ts":"2024-07-08T19:43:11.30621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.306222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.306227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.314087Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.319356Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:ha-881000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T19:43:11.321333Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.321365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.321373Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.321377Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:43:11.321518Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:43:11.325963Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T19:43:11.326646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2024-07-08T19:43:11.342065Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T19:43:11.342076Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T19:43:38.189684Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-08T19:43:38.189722Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-881000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"]}
	{"level":"warn","ts":"2024-07-08T19:43:38.189779Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T19:43:38.189828Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/07/08 19:43:38 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-08T19:43:38.204762Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T19:43:38.204785Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-08T19:43:38.204808Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"58de0efec1d86300","current-leader-member-id":"58de0efec1d86300"}
	{"level":"info","ts":"2024-07-08T19:43:38.205508Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-07-08T19:43:38.205571Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-07-08T19:43:38.205578Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-881000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"]}
	
	
	==> etcd [8949c5b568b1] <==
	{"level":"info","ts":"2024-07-08T19:44:13.795023Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T19:44:13.795042Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T19:44:13.795264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2024-07-08T19:44:13.795334Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2024-07-08T19:44:13.795435Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:44:13.795474Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:44:13.79952Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-08T19:44:13.802552Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T19:44:13.802735Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-07-08T19:44:13.803917Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-07-08T19:44:13.803857Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T19:44:14.790815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-08T19:44:14.790873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-08T19:44:14.790887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2024-07-08T19:44:14.790898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 3"}
	{"level":"info","ts":"2024-07-08T19:44:14.790903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 3"}
	{"level":"info","ts":"2024-07-08T19:44:14.790911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 3"}
	{"level":"info","ts":"2024-07-08T19:44:14.790922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 3"}
	{"level":"info","ts":"2024-07-08T19:44:14.791842Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:ha-881000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T19:44:14.791848Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:44:14.791947Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:44:14.792259Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T19:44:14.792275Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T19:44:14.794489Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2024-07-08T19:44:14.794501Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:45:30 up 1 min,  0 users,  load average: 0.52, 0.22, 0.08
	Linux ha-881000 5.10.207 #1 SMP PREEMPT Wed Jul 3 15:00:24 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8c20b27d4019] <==
	I0708 19:43:31.094017       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0708 19:43:31.094078       1 main.go:107] hostIP = 192.168.105.5
	podIP = 192.168.105.5
	I0708 19:43:31.094157       1 main.go:116] setting mtu 1500 for CNI 
	I0708 19:43:31.094166       1 main.go:146] kindnetd IP family: "ipv4"
	I0708 19:43:31.094171       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0708 19:43:31.198442       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:43:31.198484       1 main.go:227] handling current node
	
	
	==> kindnet [f18946e45a94] <==
	I0708 19:44:17.390067       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0708 19:44:17.390337       1 main.go:107] hostIP = 192.168.105.5
	podIP = 192.168.105.5
	I0708 19:44:17.390686       1 main.go:116] setting mtu 1500 for CNI 
	I0708 19:44:17.390726       1 main.go:146] kindnetd IP family: "ipv4"
	I0708 19:44:17.390750       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0708 19:44:47.514460       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0708 19:44:47.519866       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:44:47.519887       1 main.go:227] handling current node
	I0708 19:44:57.528696       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:44:57.528807       1 main.go:227] handling current node
	I0708 19:45:07.530350       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:45:07.530371       1 main.go:227] handling current node
	I0708 19:45:17.533059       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:45:17.533074       1 main.go:227] handling current node
	I0708 19:45:27.543017       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:45:27.543031       1 main.go:227] handling current node
	
	
	==> kube-apiserver [5c7a6d2a7b0f] <==
	I0708 19:44:15.329330       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0708 19:44:15.351148       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0708 19:44:15.351278       1 policy_source.go:224] refreshing policies
	I0708 19:44:15.385492       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0708 19:44:15.385534       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0708 19:44:15.385603       1 shared_informer.go:320] Caches are synced for configmaps
	I0708 19:44:15.385522       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0708 19:44:15.385542       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0708 19:44:15.385549       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0708 19:44:15.388521       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0708 19:44:15.392571       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0708 19:44:15.392584       1 aggregator.go:165] initial CRD sync complete...
	I0708 19:44:15.392587       1 autoregister_controller.go:141] Starting autoregister controller
	I0708 19:44:15.392589       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0708 19:44:15.392591       1 cache.go:39] Caches are synced for autoregister controller
	I0708 19:44:15.418906       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 19:44:16.286885       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0708 19:44:16.394124       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0708 19:44:16.394786       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 19:44:16.397137       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 19:44:16.833110       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 19:44:16.883448       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0708 19:44:16.887128       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 19:44:17.077901       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 19:44:17.079726       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [db173c1aa7e6] <==
	W0708 19:43:39.200114       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200117       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200128       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200131       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200141       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200143       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200157       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200157       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200171       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200172       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200185       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200196       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200204       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200212       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200218       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201385       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201401       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201413       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201424       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201435       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201448       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201451       1 logging.go:59] [core] [Channel #13 SubChannel #15] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201463       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201466       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201477       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [493877591d89] <==
	I0708 19:44:28.328303       1 shared_informer.go:320] Caches are synced for attach detach
	I0708 19:44:28.333697       1 shared_informer.go:320] Caches are synced for ephemeral
	I0708 19:44:28.336197       1 shared_informer.go:320] Caches are synced for GC
	I0708 19:44:28.339397       1 shared_informer.go:320] Caches are synced for HPA
	I0708 19:44:28.340511       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0708 19:44:28.401725       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:44:28.413152       1 shared_informer.go:320] Caches are synced for disruption
	I0708 19:44:28.414270       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:44:28.425464       1 shared_informer.go:320] Caches are synced for namespace
	I0708 19:44:28.441036       1 shared_informer.go:320] Caches are synced for service account
	I0708 19:44:28.539560       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0708 19:44:28.542690       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0708 19:44:28.542710       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0708 19:44:28.542727       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0708 19:44:28.542745       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0708 19:44:28.909273       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:44:28.931688       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:44:28.931710       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0708 19:44:58.276732       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0708 19:45:20.464652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.178µs"
	I0708 19:45:20.470126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.225µs"
	I0708 19:45:20.483242       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.994192ms"
	I0708 19:45:20.483830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.841µs"
	I0708 19:45:20.488501       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="2.582432ms"
	I0708 19:45:20.488962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.921µs"
	
	
	==> kube-controller-manager [cc323cbcdc6d] <==
	I0708 19:43:27.379516       1 shared_informer.go:320] Caches are synced for taint
	I0708 19:43:27.379569       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0708 19:43:27.379665       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-881000"
	I0708 19:43:27.379876       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0708 19:43:27.400488       1 shared_informer.go:320] Caches are synced for cronjob
	I0708 19:43:27.402642       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0708 19:43:27.449755       1 shared_informer.go:320] Caches are synced for disruption
	I0708 19:43:27.456110       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:43:27.502148       1 shared_informer.go:320] Caches are synced for attach detach
	I0708 19:43:27.506149       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:43:27.911596       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:43:27.957884       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:43:27.957934       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0708 19:43:28.425227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="314.836166ms"
	I0708 19:43:28.435658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="10.396584ms"
	I0708 19:43:28.435835       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="149.208µs"
	I0708 19:43:32.844754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.079µs"
	I0708 19:43:32.851504       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.888µs"
	I0708 19:43:32.855122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.561µs"
	I0708 19:43:34.205110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.198µs"
	I0708 19:43:34.217813       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="4.734129ms"
	I0708 19:43:34.217858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.281µs"
	I0708 19:43:34.230679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.799989ms"
	I0708 19:43:34.230874       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.029µs"
	I0708 19:43:37.381649       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6f04b4be84c2] <==
	I0708 19:44:17.369768       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:44:17.378177       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.5"]
	I0708 19:44:17.395162       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:44:17.395183       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:44:17.395193       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:44:17.397365       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:44:17.397562       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:44:17.397573       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:44:17.398533       1 config.go:192] "Starting service config controller"
	I0708 19:44:17.398634       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:44:17.398654       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:44:17.398659       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:44:17.399167       1 config.go:319] "Starting node config controller"
	I0708 19:44:17.399179       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:44:17.499637       1 shared_informer.go:320] Caches are synced for node config
	I0708 19:44:17.499645       1 shared_informer.go:320] Caches are synced for service config
	I0708 19:44:17.499654       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e3b0434a308b] <==
	I0708 19:43:28.503731       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:43:28.508302       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.5"]
	I0708 19:43:28.516101       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:43:28.516115       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:43:28.516122       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:43:28.516705       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:43:28.516832       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:43:28.516838       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:43:28.517447       1 config.go:192] "Starting service config controller"
	I0708 19:43:28.517466       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:43:28.517525       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:43:28.517530       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:43:28.517796       1 config.go:319] "Starting node config controller"
	I0708 19:43:28.518198       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:43:28.618095       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 19:43:28.618123       1 shared_informer.go:320] Caches are synced for service config
	I0708 19:43:28.618242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6302ef35341b] <==
	I0708 19:44:14.406492       1 serving.go:380] Generated self-signed cert in-memory
	W0708 19:44:15.312943       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 19:44:15.312960       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 19:44:15.312966       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 19:44:15.312969       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 19:44:15.339579       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0708 19:44:15.339594       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:44:15.341275       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0708 19:44:15.342938       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0708 19:44:15.342985       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0708 19:44:15.347033       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 19:44:15.448496       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ed9f0e91126a] <==
	E0708 19:43:12.068934       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 19:43:12.068365       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 19:43:12.068955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 19:43:12.068385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.068976       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.068397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 19:43:12.069003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0708 19:43:12.068425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.069013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.068441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 19:43:12.069033       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 19:43:12.068458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 19:43:12.069050       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 19:43:12.068468       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 19:43:12.069087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 19:43:12.068628       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 19:43:12.069141       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 19:43:12.068640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.069171       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.978094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.978251       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.987481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.987495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0708 19:43:13.665698       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0708 19:43:38.188521       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 08 19:44:44 ha-881000 kubelet[1395]: E0708 19:44:44.081841    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-rlj9v" podUID="57423cc1-b13f-45c7-b2df-71621270a61f"
	Jul 08 19:44:46 ha-881000 kubelet[1395]: E0708 19:44:46.081392    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-rlj9v" podUID="57423cc1-b13f-45c7-b2df-71621270a61f"
	Jul 08 19:44:46 ha-881000 kubelet[1395]: E0708 19:44:46.081437    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-2646x" podUID="5a1aa968-b181-4318-a7f2-fb0f94617bd5"
	Jul 08 19:44:47 ha-881000 kubelet[1395]: I0708 19:44:47.321286    1395 scope.go:117] "RemoveContainer" containerID="0ae23ac6a69913979208465e09595f104e772632f3254444bde6cc9b187e4cc3"
	Jul 08 19:44:47 ha-881000 kubelet[1395]: I0708 19:44:47.321433    1395 scope.go:117] "RemoveContainer" containerID="b545f59f90f80f0cdf0042b37be15da16017501ae82b914b769f62ea576231fa"
	Jul 08 19:44:47 ha-881000 kubelet[1395]: E0708 19:44:47.321518    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(62d01d4e-c78c-499e-9905-7ff510f1edea)\"" pod="kube-system/storage-provisioner" podUID="62d01d4e-c78c-499e-9905-7ff510f1edea"
	Jul 08 19:44:47 ha-881000 kubelet[1395]: E0708 19:44:47.701938    1395 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 08 19:44:47 ha-881000 kubelet[1395]: E0708 19:44:47.701966    1395 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 08 19:44:47 ha-881000 kubelet[1395]: E0708 19:44:47.701998    1395 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57423cc1-b13f-45c7-b2df-71621270a61f-config-volume podName:57423cc1-b13f-45c7-b2df-71621270a61f nodeName:}" failed. No retries permitted until 2024-07-08 19:45:19.701983434 +0000 UTC m=+66.689156929 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/57423cc1-b13f-45c7-b2df-71621270a61f-config-volume") pod "coredns-7db6d8ff4d-rlj9v" (UID: "57423cc1-b13f-45c7-b2df-71621270a61f") : object "kube-system"/"coredns" not registered
	Jul 08 19:44:47 ha-881000 kubelet[1395]: E0708 19:44:47.702005    1395 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a1aa968-b181-4318-a7f2-fb0f94617bd5-config-volume podName:5a1aa968-b181-4318-a7f2-fb0f94617bd5 nodeName:}" failed. No retries permitted until 2024-07-08 19:45:19.702001963 +0000 UTC m=+66.689175500 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5a1aa968-b181-4318-a7f2-fb0f94617bd5-config-volume") pod "coredns-7db6d8ff4d-2646x" (UID: "5a1aa968-b181-4318-a7f2-fb0f94617bd5") : object "kube-system"/"coredns" not registered
	Jul 08 19:44:48 ha-881000 kubelet[1395]: E0708 19:44:48.081943    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-rlj9v" podUID="57423cc1-b13f-45c7-b2df-71621270a61f"
	Jul 08 19:44:48 ha-881000 kubelet[1395]: E0708 19:44:48.081967    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-2646x" podUID="5a1aa968-b181-4318-a7f2-fb0f94617bd5"
	Jul 08 19:44:48 ha-881000 kubelet[1395]: E0708 19:44:48.123228    1395 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Jul 08 19:44:50 ha-881000 kubelet[1395]: E0708 19:44:50.081719    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-2646x" podUID="5a1aa968-b181-4318-a7f2-fb0f94617bd5"
	Jul 08 19:44:50 ha-881000 kubelet[1395]: E0708 19:44:50.081719    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-rlj9v" podUID="57423cc1-b13f-45c7-b2df-71621270a61f"
	Jul 08 19:44:52 ha-881000 kubelet[1395]: E0708 19:44:52.083834    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-2646x" podUID="5a1aa968-b181-4318-a7f2-fb0f94617bd5"
	Jul 08 19:44:52 ha-881000 kubelet[1395]: E0708 19:44:52.084071    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-rlj9v" podUID="57423cc1-b13f-45c7-b2df-71621270a61f"
	Jul 08 19:45:02 ha-881000 kubelet[1395]: I0708 19:45:02.082351    1395 scope.go:117] "RemoveContainer" containerID="b545f59f90f80f0cdf0042b37be15da16017501ae82b914b769f62ea576231fa"
	Jul 08 19:45:02 ha-881000 kubelet[1395]: E0708 19:45:02.082661    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(62d01d4e-c78c-499e-9905-7ff510f1edea)\"" pod="kube-system/storage-provisioner" podUID="62d01d4e-c78c-499e-9905-7ff510f1edea"
	Jul 08 19:45:13 ha-881000 kubelet[1395]: I0708 19:45:13.082459    1395 scope.go:117] "RemoveContainer" containerID="b545f59f90f80f0cdf0042b37be15da16017501ae82b914b769f62ea576231fa"
	Jul 08 19:45:13 ha-881000 kubelet[1395]: E0708 19:45:13.090432    1395 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 19:45:13 ha-881000 kubelet[1395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 19:45:13 ha-881000 kubelet[1395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 19:45:13 ha-881000 kubelet[1395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:45:13 ha-881000 kubelet[1395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [b545f59f90f8] <==
	I0708 19:44:17.284889       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0708 19:44:47.287330       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f496d2b5c569] <==
	I0708 19:45:13.154098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 19:45:13.158957       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 19:45:13.159047       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 19:45:30.543816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 19:45:30.544192       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-881000_48b3a288-eb17-458c-84d4-bbd1f4131e85!
	I0708 19:45:30.544592       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bb7994d-1374-425c-b6a5-ded5a8749b0f", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-881000_48b3a288-eb17-458c-84d4-bbd1f4131e85 became leader
	I0708 19:45:30.645581       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-881000_48b3a288-eb17-458c-84d4-bbd1f4131e85!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ha-881000 -n ha-881000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-881000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (104.12s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-881000" in json of 'profile list' to have "Degraded" status but have "Running" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-881000\",\"Status\":\"Running\",\"Config\":{\"Name\":\"ha-881000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-881000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersio
n\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"Sock
etVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  --             | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  --             | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  -- nslookup    | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o          | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| node    | add -p ha-881000 -v=7                | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-881000 node stop m02 -v=7         | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-881000 node start m02 -v=7        | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-881000 -v=7               | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:41 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-881000 -v=7                    | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:41 PDT | 08 Jul 24 12:42 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-881000 --wait=true -v=7        | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:42 PDT | 08 Jul 24 12:43 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-881000                    | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT |                     |
	| node    | ha-881000 node delete m03 -v=7       | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | ha-881000 stop -v=7                  | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT | 08 Jul 24 12:43 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-881000 --wait=true             | ha-881000 | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT | 08 Jul 24 12:45 PDT |
	|         | -v=7 --alsologtostderr               |           |         |         |                     |                     |
	|         | --driver=qemu2                       |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 12:43:47
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 12:43:47.203036    2850 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:43:47.203222    2850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:43:47.203226    2850 out.go:304] Setting ErrFile to fd 2...
	I0708 12:43:47.203228    2850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:43:47.203368    2850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:43:47.204366    2850 out.go:298] Setting JSON to false
	I0708 12:43:47.220335    2850 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2595,"bootTime":1720465232,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:43:47.220396    2850 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:43:47.226067    2850 out.go:177] * [ha-881000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:43:47.233033    2850 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:43:47.233084    2850 notify.go:220] Checking for updates...
	I0708 12:43:47.239959    2850 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:43:47.242984    2850 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:43:47.246029    2850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:43:47.248929    2850 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:43:47.252022    2850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:43:47.255376    2850 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:43:47.255631    2850 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:43:47.259894    2850 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 12:43:47.266997    2850 start.go:297] selected driver: qemu2
	I0708 12:43:47.267005    2850 start.go:901] validating driver "qemu2" against &{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:43:47.267055    2850 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:43:47.269481    2850 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:43:47.269519    2850 cni.go:84] Creating CNI manager for ""
	I0708 12:43:47.269525    2850 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:43:47.269569    2850 start.go:340] cluster config:
	{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:43:47.273287    2850 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:43:47.281009    2850 out.go:177] * Starting "ha-881000" primary control-plane node in "ha-881000" cluster
	I0708 12:43:47.284840    2850 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:43:47.284857    2850 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:43:47.284864    2850 cache.go:56] Caching tarball of preloaded images
	I0708 12:43:47.284919    2850 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:43:47.284925    2850 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:43:47.284990    2850 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json ...
	I0708 12:43:47.285421    2850 start.go:360] acquireMachinesLock for ha-881000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:43:47.285455    2850 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "ha-881000"
	I0708 12:43:47.285464    2850 start.go:96] Skipping create...Using existing machine configuration
	I0708 12:43:47.285472    2850 fix.go:54] fixHost starting: 
	I0708 12:43:47.285587    2850 fix.go:112] recreateIfNeeded on ha-881000: state=Stopped err=<nil>
	W0708 12:43:47.285596    2850 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 12:43:47.293796    2850 out.go:177] * Restarting existing qemu2 VM for "ha-881000" ...
	I0708 12:43:47.297910    2850 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:75:66:b4:8a:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/disk.qcow2
	I0708 12:43:47.338613    2850 main.go:141] libmachine: STDOUT: 
	I0708 12:43:47.338642    2850 main.go:141] libmachine: STDERR: 
	I0708 12:43:47.338646    2850 main.go:141] libmachine: Attempt 0
	I0708 12:43:47.338656    2850 main.go:141] libmachine: Searching for de:75:66:b4:8a:80 in /var/db/dhcpd_leases ...
	I0708 12:43:47.338720    2850 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0708 12:43:47.338738    2850 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668c4170}
	I0708 12:43:47.338745    2850 main.go:141] libmachine: Found match: de:75:66:b4:8a:80
	I0708 12:43:47.338752    2850 main.go:141] libmachine: IP: 192.168.105.5
	I0708 12:43:47.338756    2850 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
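The lookup logged here resolves the VM's IP address by scanning the host's DHCP lease database for the MAC address assigned to the qemu NIC. A minimal standalone sketch of that scan is below; it assumes the macOS /var/db/dhcpd_leases block format (name=, ip_address=, hw_address= lines between braces) and is an illustration, not the driver's actual code.

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // ipForMAC scans a dhcpd_leases file and returns the ip_address recorded
    // in the same lease block as the given hardware address.
    func ipForMAC(leaseFile, mac string) (string, error) {
    	f, err := os.Open(leaseFile)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case line == "{":
    			ip = "" // start of a new lease block
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
    			// hw_address is stored as "1,<mac>"; the suffix match skips the type prefix.
    			return ip, nil
    		}
    	}
    	return "", fmt.Errorf("no lease entry found for %s", mac)
    }

    func main() {
    	ip, err := ipForMAC("/var/db/dhcpd_leases", "de:75:66:b4:8a:80")
    	if err != nil {
    		fmt.Println("lookup failed:", err)
    		return
    	}
    	fmt.Println("found IP:", ip) // the log above resolves this MAC to 192.168.105.5
    }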
	I0708 12:44:06.880157    2850 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json ...
	I0708 12:44:06.880902    2850 machine.go:94] provisionDockerMachine start ...
	I0708 12:44:06.881165    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:06.881700    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:06.881713    2850 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 12:44:06.944368    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 12:44:06.944389    2850 buildroot.go:166] provisioning hostname "ha-881000"
	I0708 12:44:06.944458    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:06.944617    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:06.944624    2850 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-881000 && echo "ha-881000" | sudo tee /etc/hostname
	I0708 12:44:07.000524    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-881000
	
	I0708 12:44:07.000567    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:07.000687    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:07.000698    2850 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-881000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-881000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-881000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 12:44:07.049065    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 12:44:07.049078    2850 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19195-1270/.minikube CaCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19195-1270/.minikube}
	I0708 12:44:07.049089    2850 buildroot.go:174] setting up certificates
	I0708 12:44:07.049097    2850 provision.go:84] configureAuth start
	I0708 12:44:07.049100    2850 provision.go:143] copyHostCerts
	I0708 12:44:07.049124    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 12:44:07.049183    2850 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem, removing ...
	I0708 12:44:07.049188    2850 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 12:44:07.049594    2850 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem (1078 bytes)
	I0708 12:44:07.049759    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 12:44:07.049786    2850 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem, removing ...
	I0708 12:44:07.049790    2850 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 12:44:07.049854    2850 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem (1123 bytes)
	I0708 12:44:07.049950    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 12:44:07.049978    2850 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem, removing ...
	I0708 12:44:07.049982    2850 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 12:44:07.050038    2850 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem (1675 bytes)
	I0708 12:44:07.050147    2850 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem org=jenkins.ha-881000 san=[127.0.0.1 192.168.105.5 ha-881000 localhost minikube]
	I0708 12:44:07.117812    2850 provision.go:177] copyRemoteCerts
	I0708 12:44:07.117840    2850 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 12:44:07.117846    2850 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:44:07.141822    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 12:44:07.141866    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0708 12:44:07.149723    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 12:44:07.149757    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 12:44:07.157502    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 12:44:07.157533    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 12:44:07.165410    2850 provision.go:87] duration metric: took 116.310917ms to configureAuth
	I0708 12:44:07.165421    2850 buildroot.go:189] setting minikube options for container-runtime
	I0708 12:44:07.165537    2850 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:44:07.165568    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:07.165650    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:07.165656    2850 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0708 12:44:07.211537    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0708 12:44:07.211545    2850 buildroot.go:70] root file system type: tmpfs
	I0708 12:44:07.211595    2850 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0708 12:44:07.211635    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:07.211741    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:07.211773    2850 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0708 12:44:07.259492    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0708 12:44:07.259544    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:07.259647    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:07.259656    2850 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0708 12:44:08.699211    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0708 12:44:08.699224    2850 machine.go:97] duration metric: took 1.818356625s to provisionDockerMachine
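The generated unit is only swapped into place when it differs from what is already on disk, so repeated starts do not restart Docker needlessly. Below is a minimal local sketch of that diff-then-replace-then-restart pattern; minikube runs the equivalent shell pipeline over SSH, so the direct os/exec calls and hard-coded paths here are purely illustrative.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // installUnit writes the new unit next to the live one and only swaps it in,
    // reloads systemd, and restarts docker when the content actually changed.
    func installUnit(newContent []byte) error {
    	const unit = "/lib/systemd/system/docker.service"
    	tmp := unit + ".new"
    	if err := os.WriteFile(tmp, newContent, 0o644); err != nil {
    		return err
    	}
    	// diff -u exits non-zero when the files differ or the target is missing,
    	// which is exactly the case where the new unit must be installed.
    	if exec.Command("diff", "-u", unit, tmp).Run() == nil {
    		return os.Remove(tmp) // unchanged: leave the running service alone
    	}
    	if err := os.Rename(tmp, unit); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v failed: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := installUnit([]byte("[Unit]\nDescription=Docker Application Container Engine\n")); err != nil {
    		fmt.Println(err)
    	}
    }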
	I0708 12:44:08.699231    2850 start.go:293] postStartSetup for "ha-881000" (driver="qemu2")
	I0708 12:44:08.699237    2850 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 12:44:08.699306    2850 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 12:44:08.699315    2850 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:44:08.724566    2850 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 12:44:08.726115    2850 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 12:44:08.726123    2850 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/addons for local assets ...
	I0708 12:44:08.726215    2850 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/files for local assets ...
	I0708 12:44:08.726342    2850 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> 17672.pem in /etc/ssl/certs
	I0708 12:44:08.726347    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> /etc/ssl/certs/17672.pem
	I0708 12:44:08.726466    2850 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 12:44:08.729622    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /etc/ssl/certs/17672.pem (1708 bytes)
	I0708 12:44:08.737764    2850 start.go:296] duration metric: took 38.5285ms for postStartSetup
	I0708 12:44:08.737776    2850 fix.go:56] duration metric: took 21.452818125s for fixHost
	I0708 12:44:08.737808    2850 main.go:141] libmachine: Using SSH client type: native
	I0708 12:44:08.737906    2850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536a920] 0x10536d180 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0708 12:44:08.737913    2850 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 12:44:08.782598    2850 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720467848.618609754
	
	I0708 12:44:08.782613    2850 fix.go:216] guest clock: 1720467848.618609754
	I0708 12:44:08.782617    2850 fix.go:229] Guest: 2024-07-08 12:44:08.618609754 -0700 PDT Remote: 2024-07-08 12:44:08.737777 -0700 PDT m=+21.554817334 (delta=-119.167246ms)
	I0708 12:44:08.782628    2850 fix.go:200] guest clock delta is within tolerance: -119.167246ms
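The clock check parses the guest's epoch timestamp (seconds with nanoseconds) and compares it against the host clock; only when the delta falls outside a tolerance would the guest's time be resynced. A small sketch of that arithmetic, with an assumed one-second threshold for illustration:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // clockDelta converts the guest's "seconds.nanoseconds" timestamp into a
    // time.Time and returns how far it is ahead of (positive) or behind
    // (negative) the host clock.
    func clockDelta(guestEpoch string) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestEpoch, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(time.Now()), nil
    }

    func main() {
    	// Value taken from the log line above.
    	d, err := clockDelta("1720467848.618609754")
    	if err != nil {
    		panic(err)
    	}
    	const tolerance = time.Second // assumed threshold for illustration
    	if d < -tolerance || d > tolerance {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync guest time\n", d)
    	} else {
    		fmt.Printf("guest clock delta %v is within tolerance\n", d)
    	}
    }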
	I0708 12:44:08.782631    2850 start.go:83] releasing machines lock for "ha-881000", held for 21.497684416s
	I0708 12:44:08.782905    2850 ssh_runner.go:195] Run: cat /version.json
	I0708 12:44:08.782909    2850 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 12:44:08.782912    2850 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:44:08.782927    2850 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:44:08.850123    2850 ssh_runner.go:195] Run: systemctl --version
	I0708 12:44:08.852483    2850 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 12:44:08.854488    2850 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 12:44:08.854513    2850 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 12:44:08.860424    2850 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
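The find command above sidelines any pre-existing bridge or podman CNI config files by renaming them with a .mk_disabled suffix so they cannot conflict with the CNI installed later. A rough Go equivalent of that rename step (glob patterns taken from the command; error handling kept minimal):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // disableConflictingCNIConfigs renames matching files in dir to
    // <name>.mk_disabled, mirroring the shell command in the log above.
    func disableConflictingCNIConfigs(dir string) ([]string, error) {
    	var disabled []string
    	for _, pattern := range []string{"*bridge*", "*podman*"} {
    		matches, err := filepath.Glob(filepath.Join(dir, pattern))
    		if err != nil {
    			return nil, err
    		}
    		for _, m := range matches {
    			if filepath.Ext(m) == ".mk_disabled" {
    				continue // already sidelined
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				return nil, err
    			}
    			disabled = append(disabled, m)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableConflictingCNIConfigs("/etc/cni/net.d")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("disabled CNI configs:", disabled)
    }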
	I0708 12:44:08.860432    2850 start.go:494] detecting cgroup driver to use...
	I0708 12:44:08.860498    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:44:08.866999    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0708 12:44:08.870556    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0708 12:44:08.874028    2850 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0708 12:44:08.874056    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0708 12:44:08.877532    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:44:08.881087    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0708 12:44:08.884937    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:44:08.888816    2850 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 12:44:08.892627    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0708 12:44:08.896492    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0708 12:44:08.900576    2850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0708 12:44:08.904522    2850 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 12:44:08.908642    2850 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 12:44:08.912271    2850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:44:08.995342    2850 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0708 12:44:09.004364    2850 start.go:494] detecting cgroup driver to use...
	I0708 12:44:09.004428    2850 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0708 12:44:09.013252    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:44:09.020793    2850 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 12:44:09.031124    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:44:09.036852    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:44:09.042266    2850 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0708 12:44:09.088268    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:44:09.094797    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:44:09.101383    2850 ssh_runner.go:195] Run: which cri-dockerd
	I0708 12:44:09.102715    2850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0708 12:44:09.105836    2850 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0708 12:44:09.111803    2850 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0708 12:44:09.186218    2850 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0708 12:44:09.271456    2850 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0708 12:44:09.271510    2850 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0708 12:44:09.277562    2850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:44:09.358997    2850 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 12:44:11.572555    2850 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.213594875s)
	I0708 12:44:11.572614    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0708 12:44:11.577969    2850 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0708 12:44:11.585005    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 12:44:11.590567    2850 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0708 12:44:11.674609    2850 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0708 12:44:11.758014    2850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:44:11.829180    2850 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0708 12:44:11.835750    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 12:44:11.841363    2850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:44:11.922218    2850 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0708 12:44:11.946744    2850 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0708 12:44:11.946808    2850 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0708 12:44:11.949414    2850 start.go:562] Will wait 60s for crictl version
	I0708 12:44:11.949450    2850 ssh_runner.go:195] Run: which crictl
	I0708 12:44:11.951025    2850 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 12:44:11.966187    2850 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0708 12:44:11.966254    2850 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 12:44:11.977143    2850 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 12:44:11.990230    2850 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0708 12:44:11.990352    2850 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0708 12:44:11.991832    2850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 12:44:11.996404    2850 kubeadm.go:877] updating cluster {Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 12:44:11.996453    2850 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:44:11.996490    2850 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 12:44:12.002536    2850 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0708 12:44:12.002544    2850 docker.go:615] Images already preloaded, skipping extraction
	I0708 12:44:12.002600    2850 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 12:44:12.008391    2850 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0708 12:44:12.008408    2850 cache_images.go:84] Images are preloaded, skipping loading
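The decision to skip extraction reduces to comparing the repository:tag list reported by the runtime against the images expected for this Kubernetes version. A rough sketch of that comparison follows; the expected list here is copied from the log output above, whereas minikube derives it programmatically.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // missingImages returns the expected images that `docker images` does not report.
    func missingImages(expected []string) ([]string, error) {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	have := make(map[string]bool)
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	var missing []string
    	for _, img := range expected {
    		if !have[img] {
    			missing = append(missing, img)
    		}
    	}
    	return missing, nil
    }

    func main() {
    	expected := []string{
    		"registry.k8s.io/kube-apiserver:v1.30.2",
    		"registry.k8s.io/etcd:3.5.12-0",
    		"registry.k8s.io/coredns/coredns:v1.11.1",
    		"registry.k8s.io/pause:3.9",
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    	}
    	missing, err := missingImages(expected)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if len(missing) == 0 {
    		fmt.Println("images already preloaded, skipping extraction")
    	} else {
    		fmt.Println("need to load:", missing)
    	}
    }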
	I0708 12:44:12.008412    2850 kubeadm.go:928] updating node { 192.168.105.5 8443 v1.30.2 docker true true} ...
	I0708 12:44:12.008472    2850 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-881000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 12:44:12.008526    2850 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0708 12:44:12.016035    2850 cni.go:84] Creating CNI manager for ""
	I0708 12:44:12.016043    2850 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:44:12.016048    2850 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 12:44:12.016059    2850 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-881000 NodeName:ha-881000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 12:44:12.016115    2850 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-881000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 12:44:12.016169    2850 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 12:44:12.020558    2850 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 12:44:12.020590    2850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 12:44:12.024214    2850 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0708 12:44:12.030381    2850 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 12:44:12.036219    2850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0708 12:44:12.042370    2850 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0708 12:44:12.043714    2850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 12:44:12.048099    2850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:44:12.123087    2850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 12:44:12.130120    2850 certs.go:68] Setting up /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000 for IP: 192.168.105.5
	I0708 12:44:12.130127    2850 certs.go:194] generating shared ca certs ...
	I0708 12:44:12.130135    2850 certs.go:226] acquiring lock for ca certs: {Name:mka13b605a6983b2618b91f3a0bdec43c132a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:44:12.130297    2850 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key
	I0708 12:44:12.130354    2850 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key
	I0708 12:44:12.130361    2850 certs.go:256] generating profile certs ...
	I0708 12:44:12.130430    2850 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key
	I0708 12:44:12.130487    2850 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.174b6ad8
	I0708 12:44:12.130531    2850 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key
	I0708 12:44:12.130540    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 12:44:12.130552    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 12:44:12.130563    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 12:44:12.130574    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 12:44:12.130584    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 12:44:12.130604    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 12:44:12.130622    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 12:44:12.130633    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 12:44:12.130700    2850 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem (1338 bytes)
	W0708 12:44:12.130737    2850 certs.go:480] ignoring /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767_empty.pem, impossibly tiny 0 bytes
	I0708 12:44:12.130742    2850 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 12:44:12.130763    2850 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem (1078 bytes)
	I0708 12:44:12.130783    2850 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem (1123 bytes)
	I0708 12:44:12.130805    2850 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem (1675 bytes)
	I0708 12:44:12.130843    2850 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem (1708 bytes)
	I0708 12:44:12.130869    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> /usr/share/ca-certificates/17672.pem
	I0708 12:44:12.130881    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:44:12.130891    2850 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem -> /usr/share/ca-certificates/1767.pem
	I0708 12:44:12.131188    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 12:44:12.143161    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 12:44:12.155852    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 12:44:12.168576    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 12:44:12.179921    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0708 12:44:12.190763    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 12:44:12.202701    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 12:44:12.213932    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 12:44:12.222730    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /usr/share/ca-certificates/17672.pem (1708 bytes)
	I0708 12:44:12.233466    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 12:44:12.242648    2850 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem --> /usr/share/ca-certificates/1767.pem (1338 bytes)
	I0708 12:44:12.252255    2850 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 12:44:12.258369    2850 ssh_runner.go:195] Run: openssl version
	I0708 12:44:12.260678    2850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17672.pem && ln -fs /usr/share/ca-certificates/17672.pem /etc/ssl/certs/17672.pem"
	I0708 12:44:12.264831    2850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17672.pem
	I0708 12:44:12.266447    2850 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:34 /usr/share/ca-certificates/17672.pem
	I0708 12:44:12.266468    2850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17672.pem
	I0708 12:44:12.268480    2850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17672.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 12:44:12.272431    2850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 12:44:12.276375    2850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:44:12.277931    2850 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:44:12.277953    2850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:44:12.279904    2850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 12:44:12.283758    2850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1767.pem && ln -fs /usr/share/ca-certificates/1767.pem /etc/ssl/certs/1767.pem"
	I0708 12:44:12.287995    2850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1767.pem
	I0708 12:44:12.289585    2850 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:34 /usr/share/ca-certificates/1767.pem
	I0708 12:44:12.289604    2850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1767.pem
	I0708 12:44:12.291681    2850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1767.pem /etc/ssl/certs/51391683.0"
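Each CA file is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted CAs. A sketch of that hash-and-symlink step, shelling out to openssl the same way the commands above do:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert links certPath into certsDir under its OpenSSL subject hash,
    // e.g. /etc/ssl/certs/b5213941.0 -> /usr/share/ca-certificates/minikubeCA.pem.
    func installCACert(certPath, certsDir string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	// Refresh the symlink: remove any stale link first, then recreate it.
    	_ = os.Remove(link)
    	if err := os.Symlink(certPath, link); err != nil {
    		return "", err
    	}
    	return link, nil
    }

    func main() {
    	link, err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("installed CA symlink:", link)
    }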
	I0708 12:44:12.295518    2850 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 12:44:12.297115    2850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 12:44:12.299246    2850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 12:44:12.301320    2850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 12:44:12.303463    2850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 12:44:12.305528    2850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 12:44:12.307618    2850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
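The -checkend 86400 invocations ask whether each control-plane certificate will still be valid 24 hours from now; a failing check would typically lead to the certificate being regenerated before the restart continues. The same test can be done in-process with crypto/x509, sketched below (paths taken from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d (the openssl equivalent is `x509 -checkend <seconds>`).
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    	} {
    		soon, err := expiresWithin(p, 24*time.Hour)
    		if err != nil {
    			fmt.Println(p, "error:", err)
    			continue
    		}
    		fmt.Printf("%s expires within 24h: %v\n", p, soon)
    	}
    }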
	I0708 12:44:12.309856    2850 kubeadm.go:391] StartCluster: {Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clus
terName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:44:12.309921    2850 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0708 12:44:12.315094    2850 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 12:44:12.318646    2850 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 12:44:12.318652    2850 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 12:44:12.318654    2850 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 12:44:12.318674    2850 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 12:44:12.321937    2850 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:44:12.322222    2850 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-881000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:44:12.322273    2850 kubeconfig.go:62] /Users/jenkins/minikube-integration/19195-1270/kubeconfig needs updating (will repair): [kubeconfig missing "ha-881000" cluster setting kubeconfig missing "ha-881000" context setting]
	I0708 12:44:12.322403    2850 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:44:12.322885    2850 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:44:12.323014    2850 kapi.go:59] client config for ha-881000: &rest.Config{Host:"https://192.168.105.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066fb4f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
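The rest.Config dump shows the client being constructed directly from the profile's client certificate, key, and cluster CA rather than from a kubeconfig context. A minimal client-go sketch of the same construction is below; the import paths are the standard k8s.io/client-go ones and the file locations are copied from the log, but the node listing at the end is only an example use of the client.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	profile := "/Users/jenkins/minikube-integration/19195-1270/.minikube"
    	// Build the client config from certificate files, mirroring the kapi.go dump above.
    	cfg := &rest.Config{
    		Host: "https://192.168.105.5:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: profile + "/profiles/ha-881000/client.crt",
    			KeyFile:  profile + "/profiles/ha-881000/client.key",
    			CAFile:   profile + "/ca.crt",
    		},
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("nodes:", len(nodes.Items))
    }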
	I0708 12:44:12.323226    2850 cert_rotation.go:137] Starting client certificate rotation controller
	I0708 12:44:12.323330    2850 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 12:44:12.326595    2850 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.105.5
	I0708 12:44:12.326611    2850 kubeadm.go:1154] stopping kube-system containers ...
	I0708 12:44:12.326653    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0708 12:44:12.333291    2850 docker.go:483] Stopping containers: [57f745d9e2f1 e5decdf53e42 0ae23ac6a699 e5df0a87fa90 e337c3f92f0c 1752461159c8 8c20b27d4019 e3b0434a308b 52b9dd42202b f031f136a08f ed9f0e91126a 5c4705f221f3 db173c1aa7e6 cc323cbcdc6d e9a1e4f9ec7d 109f63f7b186 59d4e027b086 3994029f9ba4]
	I0708 12:44:12.333349    2850 ssh_runner.go:195] Run: docker stop 57f745d9e2f1 e5decdf53e42 0ae23ac6a699 e5df0a87fa90 e337c3f92f0c 1752461159c8 8c20b27d4019 e3b0434a308b 52b9dd42202b f031f136a08f ed9f0e91126a 5c4705f221f3 db173c1aa7e6 cc323cbcdc6d e9a1e4f9ec7d 109f63f7b186 59d4e027b086 3994029f9ba4
	I0708 12:44:12.339775    2850 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 12:44:12.346385    2850 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 12:44:12.349708    2850 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 12:44:12.349715    2850 kubeadm.go:156] found existing configuration files:
	
	I0708 12:44:12.349734    2850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 12:44:12.353095    2850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 12:44:12.353120    2850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 12:44:12.356614    2850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 12:44:12.360206    2850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 12:44:12.360235    2850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 12:44:12.363635    2850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 12:44:12.366686    2850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 12:44:12.366710    2850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 12:44:12.369771    2850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 12:44:12.372940    2850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 12:44:12.372971    2850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 12:44:12.376521    2850 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 12:44:12.379929    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 12:44:12.429414    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 12:44:13.059616    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 12:44:13.177332    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 12:44:13.215247    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 12:44:13.254347    2850 api_server.go:52] waiting for apiserver process to appear ...
	I0708 12:44:13.254455    2850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:44:13.756509    2850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:44:14.256490    2850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:44:14.261485    2850 api_server.go:72] duration metric: took 1.007164792s to wait for apiserver process to appear ...
	I0708 12:44:14.261494    2850 api_server.go:88] waiting for apiserver healthz status ...
	I0708 12:44:14.261503    2850 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:44:15.464003    2850 api_server.go:279] https://192.168.105.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0708 12:44:15.464018    2850 api_server.go:103] status: https://192.168.105.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0708 12:44:15.464029    2850 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:44:15.503680    2850 api_server.go:279] https://192.168.105.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 12:44:15.503696    2850 api_server.go:103] status: https://192.168.105.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 12:44:15.763559    2850 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:44:15.766515    2850 api_server.go:279] https://192.168.105.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 12:44:15.766528    2850 api_server.go:103] status: https://192.168.105.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 12:44:16.263502    2850 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:44:16.266091    2850 api_server.go:279] https://192.168.105.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0708 12:44:16.266101    2850 api_server.go:103] status: https://192.168.105.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0708 12:44:16.763535    2850 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:44:16.766370    2850 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0708 12:44:16.766408    2850 round_trippers.go:463] GET https://192.168.105.5:8443/version
	I0708 12:44:16.766412    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:16.766416    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:16.766419    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:16.770263    2850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 12:44:16.770305    2850 api_server.go:141] control plane version: v1.30.2
	I0708 12:44:16.770312    2850 api_server.go:131] duration metric: took 2.50887525s to wait for apiserver health ...
	I0708 12:44:16.770316    2850 cni.go:84] Creating CNI manager for ""
	I0708 12:44:16.770320    2850 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:44:16.774540    2850 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0708 12:44:16.778515    2850 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0708 12:44:16.780799    2850 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0708 12:44:16.780805    2850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0708 12:44:16.787435    2850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0708 12:44:16.998658    2850 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 12:44:16.998773    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:44:16.998777    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:16.998782    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:16.998785    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.000294    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:17.004459    2850 system_pods.go:59] 9 kube-system pods found
	I0708 12:44:17.004471    2850 system_pods.go:61] "coredns-7db6d8ff4d-2646x" [5a1aa968-b181-4318-a7f2-fb0f94617bd5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 12:44:17.004474    2850 system_pods.go:61] "coredns-7db6d8ff4d-rlj9v" [57423cc1-b13f-45c7-b2df-71621270a61f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0708 12:44:17.004478    2850 system_pods.go:61] "etcd-ha-881000" [b905dbae-009a-44f3-87e4-756dfae87ce6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 12:44:17.004481    2850 system_pods.go:61] "kindnet-mmchf" [2f8fecb7-8906-46c9-9d55-c56254b8b3d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0708 12:44:17.004483    2850 system_pods.go:61] "kube-apiserver-ha-881000" [ea5dbd32-5574-42d6-9efd-3956e499027a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 12:44:17.004487    2850 system_pods.go:61] "kube-controller-manager-ha-881000" [3f0c772a-e298-47e5-a20d-4201060d8e09] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 12:44:17.004489    2850 system_pods.go:61] "kube-proxy-nqzkk" [0037978f-9b19-49c2-a0fd-a7757effb5e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0708 12:44:17.004501    2850 system_pods.go:61] "kube-scheduler-ha-881000" [03ce3397-c2e8-4b90-a33c-11fb0368a30e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 12:44:17.004505    2850 system_pods.go:61] "storage-provisioner" [62d01d4e-c78c-499e-9905-7ff510f1edea] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 12:44:17.004510    2850 system_pods.go:74] duration metric: took 5.838958ms to wait for pod list to return data ...
	I0708 12:44:17.004515    2850 node_conditions.go:102] verifying NodePressure condition ...
	I0708 12:44:17.004542    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes
	I0708 12:44:17.004545    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.004548    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.004550    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.005727    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:17.006038    2850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 12:44:17.006044    2850 node_conditions.go:123] node cpu capacity is 2
	I0708 12:44:17.006051    2850 node_conditions.go:105] duration metric: took 1.533833ms to run NodePressure ...
	I0708 12:44:17.006057    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 12:44:17.245923    2850 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0708 12:44:17.245984    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0708 12:44:17.245988    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.245991    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.245999    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.247183    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:17.248109    2850 kubeadm.go:733] kubelet initialised
	I0708 12:44:17.248118    2850 kubeadm.go:734] duration metric: took 2.183ms waiting for restarted kubelet to initialise ...
	I0708 12:44:17.248122    2850 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 12:44:17.248146    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:44:17.248150    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.248154    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.248157    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.249946    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:17.252016    2850 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:17.252049    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:17.252052    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.252056    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.252058    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.252777    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.253056    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.253060    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.253064    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.253067    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.253789    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.254087    2850 pod_ready.go:97] node "ha-881000" hosting pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.254093    2850 pod_ready.go:81] duration metric: took 2.068791ms for pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:17.254098    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.254101    2850 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:17.254121    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rlj9v
	I0708 12:44:17.254124    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.254128    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.254130    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.254769    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.255058    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.255061    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.255064    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.255066    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.255634    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.255780    2850 pod_ready.go:97] node "ha-881000" hosting pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.255786    2850 pod_ready.go:81] duration metric: took 1.681917ms for pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:17.255789    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.255791    2850 pod_ready.go:78] waiting up to 4m0s for pod "etcd-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:17.255807    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-881000
	I0708 12:44:17.255810    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.255813    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.255815    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.256424    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.256669    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.256672    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.256675    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.256678    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.257307    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.257571    2850 pod_ready.go:97] node "ha-881000" hosting pod "etcd-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.257575    2850 pod_ready.go:81] duration metric: took 1.781792ms for pod "etcd-ha-881000" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:17.257578    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "etcd-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.257583    2850 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:17.257597    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-881000
	I0708 12:44:17.257599    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.257602    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.257605    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.258263    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.400775    2850 request.go:629] Waited for 142.183583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.400814    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.400819    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.400823    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.400833    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.405958    2850 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0708 12:44:17.406293    2850 pod_ready.go:97] node "ha-881000" hosting pod "kube-apiserver-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.406306    2850 pod_ready.go:81] duration metric: took 148.723459ms for pod "kube-apiserver-ha-881000" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:17.406313    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "kube-apiserver-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.406317    2850 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:17.600801    2850 request.go:629] Waited for 194.4485ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-881000
	I0708 12:44:17.600827    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-881000
	I0708 12:44:17.600831    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.600835    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.600838    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.601946    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:17.799018    2850 request.go:629] Waited for 196.693583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.799048    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:17.799051    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:17.799056    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:17.799058    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:17.799927    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:17.800125    2850 pod_ready.go:97] node "ha-881000" hosting pod "kube-controller-manager-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.800133    2850 pod_ready.go:81] duration metric: took 393.821667ms for pod "kube-controller-manager-ha-881000" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:17.800141    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "kube-controller-manager-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:17.800145    2850 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nqzkk" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:18.000719    2850 request.go:629] Waited for 200.550291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqzkk
	I0708 12:44:18.000760    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqzkk
	I0708 12:44:18.000764    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.000767    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.000771    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.001795    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:18.200733    2850 request.go:629] Waited for 198.662625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:18.200760    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:18.200763    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.200768    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.200771    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.201703    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:18.201906    2850 pod_ready.go:97] node "ha-881000" hosting pod "kube-proxy-nqzkk" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:18.201917    2850 pod_ready.go:81] duration metric: took 401.777959ms for pod "kube-proxy-nqzkk" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:18.201922    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "kube-proxy-nqzkk" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:18.201926    2850 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:18.400713    2850 request.go:629] Waited for 198.750875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-881000
	I0708 12:44:18.400740    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-881000
	I0708 12:44:18.400743    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.400754    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.400770    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.401682    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:18.600687    2850 request.go:629] Waited for 198.768834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:18.600709    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:18.600713    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.600717    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.600720    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.601755    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:18.601960    2850 pod_ready.go:97] node "ha-881000" hosting pod "kube-scheduler-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:18.601967    2850 pod_ready.go:81] duration metric: took 400.041458ms for pod "kube-scheduler-ha-881000" in "kube-system" namespace to be "Ready" ...
	E0708 12:44:18.601972    2850 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-881000" hosting pod "kube-scheduler-ha-881000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-881000" has status "Ready":"False"
	I0708 12:44:18.601976    2850 pod_ready.go:38] duration metric: took 1.353880375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 12:44:18.601986    2850 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 12:44:18.606641    2850 ops.go:34] apiserver oom_adj: -16
	I0708 12:44:18.606648    2850 kubeadm.go:591] duration metric: took 6.288141125s to restartPrimaryControlPlane
	I0708 12:44:18.606652    2850 kubeadm.go:393] duration metric: took 6.296948166s to StartCluster
	I0708 12:44:18.606660    2850 settings.go:142] acquiring lock: {Name:mka0c397a57d617e1d77508d22cc3adb2edf5927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:44:18.606747    2850 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:44:18.607091    2850 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:44:18.607314    2850 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:44:18.607389    2850 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:44:18.607375    2850 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 12:44:18.607406    2850 addons.go:69] Setting storage-provisioner=true in profile "ha-881000"
	I0708 12:44:18.607418    2850 addons.go:234] Setting addon storage-provisioner=true in "ha-881000"
	W0708 12:44:18.607421    2850 addons.go:243] addon storage-provisioner should already be in state true
	I0708 12:44:18.607424    2850 addons.go:69] Setting default-storageclass=true in profile "ha-881000"
	I0708 12:44:18.607432    2850 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:44:18.607437    2850 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-881000"
	I0708 12:44:18.608205    2850 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:44:18.608335    2850 kapi.go:59] client config for ha-881000: &rest.Config{Host:"https://192.168.105.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066fb4f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 12:44:18.608457    2850 addons.go:234] Setting addon default-storageclass=true in "ha-881000"
	W0708 12:44:18.608462    2850 addons.go:243] addon default-storageclass should already be in state true
	I0708 12:44:18.608469    2850 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:44:18.610726    2850 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 12:44:18.610731    2850 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 12:44:18.610737    2850 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:44:18.614244    2850 out.go:177] * Verifying Kubernetes components...
	I0708 12:44:18.617317    2850 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 12:44:18.620243    2850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:44:18.624383    2850 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 12:44:18.624391    2850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 12:44:18.624397    2850 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:44:18.726309    2850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 12:44:18.733759    2850 node_ready.go:35] waiting up to 6m0s for node "ha-881000" to be "Ready" ...
	I0708 12:44:18.735513    2850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 12:44:18.745055    2850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 12:44:18.799161    2850 request.go:629] Waited for 65.348458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:18.799201    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:18.799204    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.799208    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.799210    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.799947    2850 round_trippers.go:463] GET https://192.168.105.5:8443/apis/storage.k8s.io/v1/storageclasses
	I0708 12:44:18.799951    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.799955    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.799957    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.800673    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:18.801351    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:18.801570    2850 round_trippers.go:463] PUT https://192.168.105.5:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0708 12:44:18.801577    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:18.801580    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:18.801582    2850 round_trippers.go:473]     Content-Type: application/json
	I0708 12:44:18.801585    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:18.802844    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:19.059311    2850 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0708 12:44:19.067209    2850 addons.go:510] duration metric: took 459.852666ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0708 12:44:19.235844    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:19.235851    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:19.235856    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:19.235859    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:19.236902    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:19.735892    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:19.735912    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:19.735916    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:19.735919    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:19.737534    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:20.235862    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:20.235876    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:20.235881    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:20.235883    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:20.237086    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:20.735905    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:20.735922    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:20.735927    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:20.735930    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:20.737429    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:20.737780    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:21.235787    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:21.235797    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:21.235801    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:21.235804    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:21.238836    2850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0708 12:44:21.735864    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:21.735882    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:21.735887    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:21.735890    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:21.737662    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:22.235745    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:22.235752    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:22.235756    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:22.235758    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:22.237129    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:22.735795    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:22.735809    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:22.735814    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:22.735816    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:22.737382    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:23.235749    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:23.235765    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:23.235775    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:23.235778    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:23.236820    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:23.237151    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:23.735786    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:23.735801    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:23.735806    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:23.735809    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:23.737333    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:24.235786    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:24.235803    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:24.235822    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:24.235825    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:24.237083    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:24.735772    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:24.735789    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:24.735794    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:24.735820    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:24.737353    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:25.235676    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:25.235686    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:25.235689    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:25.235691    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:25.236773    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:25.735738    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:25.735757    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:25.735763    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:25.735765    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:25.737437    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:25.737856    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:26.234556    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:26.234577    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:26.234587    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:26.234589    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:26.235839    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:26.735724    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:26.735739    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:26.735743    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:26.735746    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:26.736780    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:27.235663    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:27.235677    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:27.235684    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:27.235687    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:27.236728    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:27.735660    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:27.735676    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:27.735681    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:27.735683    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:27.737284    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:28.235615    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:28.235623    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:28.235627    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:28.235629    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:28.236823    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:28.237230    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:28.735637    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:28.735647    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:28.735651    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:28.735654    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:28.736768    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:29.235642    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:29.235655    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:29.235660    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:29.235662    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:29.236759    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:29.735679    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:29.735694    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:29.735699    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:29.735702    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:29.737248    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:30.235715    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:30.235732    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:30.235736    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:30.235738    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:30.237139    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:30.237360    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:30.735612    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:30.735621    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:30.735625    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:30.735628    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:30.737092    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:31.235569    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:31.235579    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:31.235582    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:31.235584    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:31.236633    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:31.735612    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:31.735629    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:31.735634    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:31.735636    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:31.737295    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:32.235587    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:32.235601    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:32.235606    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:32.235608    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:32.236778    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:32.735526    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:32.735536    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:32.735540    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:32.735543    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:32.736711    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:32.736963    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:33.235522    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:33.235531    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:33.235535    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:33.235537    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:33.236642    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:33.735561    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:33.735579    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:33.735583    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:33.735587    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:33.737423    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:34.234768    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:34.234774    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:34.234778    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:34.234782    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:34.235720    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:34.735499    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:34.735513    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:34.735517    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:34.735519    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:34.737106    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:34.737386    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:35.235461    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:35.235468    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:35.235471    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:35.235473    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:35.236519    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:35.735547    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:35.735566    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:35.735571    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:35.735573    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:35.737177    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:36.235487    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:36.235497    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:36.235501    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:36.235504    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:36.236545    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:36.735449    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:36.735460    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:36.735463    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:36.735465    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:36.736519    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:37.235434    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:37.235448    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:37.235453    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:37.235455    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:37.236808    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:37.237168    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:37.735447    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:37.735461    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:37.735466    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:37.735468    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:37.737064    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:38.235384    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:38.235396    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:38.235400    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:38.235402    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:38.236438    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:38.735456    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:38.735486    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:38.735492    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:38.735494    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:38.737102    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:39.235386    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:39.235400    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:39.235405    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:39.235406    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:39.236435    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:39.735371    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:39.735383    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:39.735388    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:39.735389    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:39.736892    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:39.737147    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:40.235353    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:40.235365    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:40.235369    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:40.235371    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:40.236374    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:40.735358    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:40.735368    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:40.735373    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:40.735375    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:40.736868    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:41.235350    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:41.235362    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:41.235366    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:41.235368    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:41.236415    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:41.734992    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:41.735008    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:41.735015    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:41.735017    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:41.736518    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:42.235065    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:42.235078    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:42.235083    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:42.235090    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:42.236101    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:42.236392    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:42.735337    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:42.735358    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:42.735363    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:42.735365    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:42.737010    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:43.235275    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:43.235291    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:43.235296    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:43.235298    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:43.236740    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:43.735299    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:43.735318    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:43.735335    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:43.735344    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:43.736716    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:44.234553    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:44.234566    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:44.234571    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:44.234573    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:44.235683    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:44.735264    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:44.735279    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:44.735289    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:44.735295    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:44.737018    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:44.737323    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:45.235322    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:45.235339    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:45.235343    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:45.235346    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:45.236568    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:45.735272    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:45.735286    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:45.735291    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:45.735292    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:45.736686    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:46.235249    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:46.235265    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:46.235269    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:46.235274    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:46.236232    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:46.734093    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:46.734107    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:46.734111    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:46.734113    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:46.735581    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:47.235190    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:47.235202    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:47.235206    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:47.235209    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:47.236406    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:47.236650    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:47.735215    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:47.735228    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:47.735232    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:47.735234    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:47.736259    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:48.233546    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:48.233578    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:48.233583    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:48.233585    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:48.234802    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:48.735158    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:48.735172    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:48.735177    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:48.735182    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:48.736872    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:49.233644    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:49.233670    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:49.233674    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:49.233677    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:49.234965    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:49.735126    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:49.735140    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:49.735145    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:49.735147    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:49.736687    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:49.736960    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:50.235134    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:50.235150    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:50.235154    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:50.235156    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:50.236547    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:50.735176    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:50.735195    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:50.735199    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:50.735202    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:50.736808    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:51.235103    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:51.235114    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:51.235118    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:51.235120    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:51.236309    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:51.735098    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:51.735112    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:51.735116    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:51.735119    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:51.736598    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:52.235093    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:52.235104    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:52.235109    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:52.235111    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:52.236547    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:52.236770    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:52.735055    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:52.735066    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:52.735071    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:52.735073    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:52.736570    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:53.235045    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:53.235062    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:53.235066    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:53.235069    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:53.236349    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:53.735079    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:53.735097    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:53.735102    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:53.735105    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:53.736701    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:54.235019    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:54.235031    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:54.235036    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:54.235037    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:54.235970    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:54.735046    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:54.735062    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:54.735066    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:54.735068    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:54.736566    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:54.736815    2850 node_ready.go:53] node "ha-881000" has status "Ready":"False"
	I0708 12:44:55.235012    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:55.235022    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:55.235025    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:55.235027    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:55.236372    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:55.735033    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:55.735049    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:55.735056    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:55.735059    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:55.736673    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:56.234979    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:56.234992    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:56.234995    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:56.234998    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:56.235922    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:56.236329    2850 node_ready.go:49] node "ha-881000" has status "Ready":"True"
	I0708 12:44:56.236344    2850 node_ready.go:38] duration metric: took 37.503461958s for node "ha-881000" to be "Ready" ...
	I0708 12:44:56.236348    2850 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
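	[editor's note] The loop above is minikube's node_ready wait: it polls GET /api/v1/nodes/ha-881000 roughly every 500 ms until the node reports "Ready":"True" (about 37.5 s here), then switches to waiting for the system-critical pods listed on the next line. A minimal illustrative sketch of the same polling pattern using client-go follows; this is not minikube's actual source, and the kubeconfig path and node name are assumptions taken from the log for the example.

	// sketch: poll a node's Ready condition, similar to the node_ready loop logged above
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); assumption for this example.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		const nodeName = "ha-881000" // node name as seen in the log above
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						fmt.Printf("node %q is Ready\n", nodeName)
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500 ms poll interval visible in the timestamps
		}
		fmt.Printf("timed out waiting for node %q to become Ready\n", nodeName)
	}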
	I0708 12:44:56.236370    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:44:56.236374    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:56.236377    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:56.236381    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:56.237564    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:56.239472    2850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace to be "Ready" ...
	I0708 12:44:56.239501    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:56.239505    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:56.239509    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:56.239511    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:56.240195    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:56.240468    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:56.240474    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:56.240477    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:56.240479    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:56.241124    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:56.741438    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:56.741470    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:56.741477    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:56.741479    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:56.742848    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:56.743195    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:56.743203    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:56.743206    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:56.743208    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:56.743986    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:57.241576    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:57.241586    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:57.241590    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:57.241591    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:57.242904    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:57.243256    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:57.243260    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:57.243263    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:57.243266    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:57.244066    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:57.740852    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:57.740873    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:57.740879    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:57.740882    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:57.742355    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:57.742704    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:57.742711    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:57.742713    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:57.742715    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:57.743435    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:58.241528    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:58.241540    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:58.241543    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:58.241546    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:58.242831    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:58.243203    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:58.243210    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:58.243213    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:58.243216    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:58.244052    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:58.244331    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:44:58.741564    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:58.741581    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:58.741585    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:58.741587    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:58.743058    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:58.743429    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:58.743436    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:58.743439    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:58.743448    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:58.744232    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:59.241527    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:59.241554    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:59.241558    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:59.241561    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:59.243100    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:59.243470    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:59.243475    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:59.243479    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:59.243480    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:59.244243    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:44:59.741559    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:44:59.741574    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:59.741581    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:59.741590    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:59.743220    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:44:59.743604    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:44:59.743609    2850 round_trippers.go:469] Request Headers:
	I0708 12:44:59.743612    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:44:59.743616    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:44:59.744456    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:00.241503    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:00.241514    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:00.241519    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:00.241521    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:00.242908    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:00.243345    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:00.243349    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:00.243353    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:00.243355    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:00.244097    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:00.244429    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:00.741467    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:00.741474    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:00.741478    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:00.741481    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:00.742721    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:00.743036    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:00.743040    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:00.743043    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:00.743045    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:00.743830    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:01.241466    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:01.241480    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:01.241485    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:01.241487    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:01.242976    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:01.243375    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:01.243378    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:01.243381    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:01.243383    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:01.244246    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:01.741500    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:01.741510    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:01.741515    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:01.741517    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:01.742994    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:01.743311    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:01.743315    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:01.743317    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:01.743320    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:01.744315    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:02.241478    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:02.241493    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:02.241502    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:02.241504    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:02.243027    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:02.243340    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:02.243346    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:02.243349    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:02.243351    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:02.244132    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:02.244434    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:02.741253    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:02.741263    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:02.741267    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:02.741268    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:02.742891    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:02.743290    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:02.743299    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:02.743303    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:02.743305    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:02.744218    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:03.241431    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:03.241448    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:03.241451    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:03.241457    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:03.243109    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:03.243526    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:03.243530    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:03.243534    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:03.243539    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:03.244375    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:03.741453    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:03.741471    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:03.741475    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:03.741477    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:03.743217    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:03.743611    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:03.743617    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:03.743619    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:03.743622    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:03.744468    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:04.241377    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:04.241388    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:04.241398    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:04.241401    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:04.242470    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:04.242979    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:04.242986    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:04.242990    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:04.242991    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:04.243836    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:04.741446    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:04.741465    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:04.741471    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:04.741474    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:04.743225    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:04.743625    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:04.743630    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:04.743634    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:04.743636    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:04.744628    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:04.744826    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:05.241444    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:05.241474    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:05.241480    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:05.241482    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:05.243001    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:05.243367    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:05.243376    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:05.243380    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:05.243382    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:05.244265    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:05.741419    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:05.741440    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:05.741450    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:05.741453    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:05.742990    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:05.743242    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:05.743245    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:05.743248    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:05.743250    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:05.743991    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:06.241340    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:06.241346    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:06.241353    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:06.241355    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:06.242445    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:06.242855    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:06.242859    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:06.242863    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:06.242868    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:06.243590    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:06.741354    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:06.741367    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:06.741371    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:06.741373    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:06.742464    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:06.742740    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:06.742744    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:06.742747    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:06.742749    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:06.743410    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:07.241315    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:07.241326    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:07.241338    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:07.241341    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:07.242749    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:07.243179    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:07.243183    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:07.243187    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:07.243189    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:07.243964    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:07.244151    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:07.739557    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:07.739583    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:07.739588    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:07.739590    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:07.740828    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:07.741185    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:07.741191    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:07.741194    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:07.741196    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:07.741930    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:08.241296    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:08.241313    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:08.241320    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:08.241323    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:08.242645    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:08.243001    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:08.243005    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:08.243007    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:08.243009    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:08.243876    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:08.741300    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:08.741314    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:08.741318    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:08.741320    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:08.742719    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:08.743058    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:08.743065    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:08.743069    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:08.743072    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:08.743872    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:09.240641    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:09.240654    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:09.240659    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:09.240661    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:09.242233    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:09.242515    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:09.242522    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:09.242525    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:09.242528    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:09.243385    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:09.741272    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:09.741288    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:09.741292    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:09.741295    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:09.742893    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:09.743229    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:09.743234    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:09.743238    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:09.743241    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:09.744099    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:09.744363    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:10.241267    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:10.241278    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:10.241282    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:10.241284    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:10.242588    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:10.242866    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:10.242870    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:10.242873    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:10.242874    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:10.243647    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:10.741298    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:10.741315    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:10.741320    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:10.741322    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:10.742987    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:10.743392    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:10.743400    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:10.743404    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:10.743406    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:10.744188    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:11.241183    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:11.241194    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:11.241200    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:11.241206    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:11.242415    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:11.242712    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:11.242716    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:11.242720    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:11.242721    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:11.243473    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:11.741194    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:11.741205    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:11.741210    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:11.741215    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:11.742465    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:11.742775    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:11.742786    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:11.742788    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:11.742790    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:11.743587    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:12.241201    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:12.241213    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:12.241217    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:12.241219    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:12.242626    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:12.243039    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:12.243043    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:12.243047    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:12.243049    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:12.243917    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:12.244324    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:12.741218    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:12.741232    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:12.741236    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:12.741238    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:12.742818    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:12.743152    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:12.743159    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:12.743162    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:12.743165    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:12.744041    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:13.241184    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:13.241193    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:13.241197    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:13.241200    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:13.242516    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:13.242925    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:13.242931    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:13.242934    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:13.242937    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:13.243758    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:13.741196    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:13.741225    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:13.741230    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:13.741232    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:13.742856    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:13.743178    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:13.743186    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:13.743189    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:13.743192    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:13.743979    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:14.241154    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:14.241167    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:14.241171    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:14.241173    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:14.242419    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:14.242781    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:14.242785    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:14.242788    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:14.242790    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:14.243637    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:14.741167    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:14.741183    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:14.741187    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:14.741189    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:14.742829    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:14.743216    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:14.743220    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:14.743223    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:14.743225    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:14.744156    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:14.744560    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:15.241109    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:15.241121    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:15.241125    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:15.241127    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:15.242193    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:15.242557    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:15.242562    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:15.242564    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:15.242566    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:15.243353    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:15.740357    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:15.740371    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:15.740375    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:15.740377    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:15.741721    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:15.742101    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:15.742108    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:15.742111    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:15.742113    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:15.742969    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:16.241114    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:16.241124    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:16.241129    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:16.241132    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:16.242413    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:16.242874    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:16.242878    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:16.242881    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:16.242883    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:16.243725    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:16.740853    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:16.740868    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:16.740871    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:16.740873    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:16.742018    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:16.742317    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:16.742323    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:16.742327    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:16.742329    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:16.743247    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:17.241095    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:17.241105    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:17.241108    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:17.241110    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:17.242608    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:17.243027    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:17.243033    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:17.243037    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:17.243039    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:17.243877    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:17.244151    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:17.741110    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:17.741131    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:17.741139    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:17.741141    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:17.742557    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:17.742976    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:17.742980    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:17.742983    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:17.742986    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:17.743831    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:18.241102    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:18.241115    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:18.241120    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:18.241122    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:18.242443    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:18.242853    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:18.242860    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:18.242863    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:18.242865    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:18.243717    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:18.741041    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:18.741056    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:18.741061    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:18.741063    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:18.742516    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:18.742874    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:18.742878    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:18.742881    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:18.742883    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:18.743639    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:19.241031    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:19.241056    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:19.241066    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:19.241068    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:19.242350    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:19.242645    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:19.242654    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:19.242656    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:19.242658    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:19.243475    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:19.740005    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:19.740022    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:19.740027    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:19.740031    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:19.741418    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:19.741782    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:19.741786    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:19.741790    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:19.741792    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:19.742669    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:19.742907    2850 pod_ready.go:102] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"False"
	I0708 12:45:20.241012    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:20.241026    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.241038    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.241041    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.242251    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:20.242636    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:20.242642    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.242645    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.242648    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.243540    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.741029    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2646x
	I0708 12:45:20.741049    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.741075    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.741079    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.742509    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:20.742986    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:20.742991    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.742994    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.742996    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.743926    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.744128    2850 pod_ready.go:92] pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:20.744136    2850 pod_ready.go:81] duration metric: took 24.50524075s for pod "coredns-7db6d8ff4d-2646x" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.744143    2850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.744168    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rlj9v
	I0708 12:45:20.744171    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.744175    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.744178    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.744921    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.745179    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:20.745186    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.745189    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.745191    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.746038    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.746264    2850 pod_ready.go:92] pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:20.746269    2850 pod_ready.go:81] duration metric: took 2.122458ms for pod "coredns-7db6d8ff4d-rlj9v" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.746273    2850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.746294    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-881000
	I0708 12:45:20.746297    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.746302    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.746305    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.747068    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.747506    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:20.747511    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.747513    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.747516    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.748146    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.748358    2850 pod_ready.go:92] pod "etcd-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:20.748364    2850 pod_ready.go:81] duration metric: took 2.08775ms for pod "etcd-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.748368    2850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.748384    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-881000
	I0708 12:45:20.748387    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.748399    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.748402    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.749140    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.749502    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:20.749509    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.749512    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.749514    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.750156    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.750372    2850 pod_ready.go:92] pod "kube-apiserver-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:20.750377    2850 pod_ready.go:81] duration metric: took 2.005875ms for pod "kube-apiserver-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.750381    2850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.750401    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-881000
	I0708 12:45:20.750405    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.750408    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.750411    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.751149    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.751437    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:20.751443    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.751445    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.751448    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.752108    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:20.752420    2850 pod_ready.go:92] pod "kube-controller-manager-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:20.752423    2850 pod_ready.go:81] duration metric: took 2.038708ms for pod "kube-controller-manager-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.752427    2850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nqzkk" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:20.943042    2850 request.go:629] Waited for 190.595625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqzkk
	I0708 12:45:20.943064    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqzkk
	I0708 12:45:20.943068    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:20.943071    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:20.943073    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:20.944212    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:21.143013    2850 request.go:629] Waited for 198.567333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:21.143041    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:21.143044    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:21.143048    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:21.143052    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:21.144354    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:21.144551    2850 pod_ready.go:92] pod "kube-proxy-nqzkk" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:21.144558    2850 pod_ready.go:81] duration metric: took 392.136375ms for pod "kube-proxy-nqzkk" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:21.144563    2850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:21.342994    2850 request.go:629] Waited for 198.415125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-881000
	I0708 12:45:21.343019    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-881000
	I0708 12:45:21.343023    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:21.343026    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:21.343029    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:21.344100    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:21.543018    2850 request.go:629] Waited for 198.71175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:21.543057    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-881000
	I0708 12:45:21.543060    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:21.543064    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:21.543066    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:21.544345    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:21.544618    2850 pod_ready.go:92] pod "kube-scheduler-ha-881000" in "kube-system" namespace has status "Ready":"True"
	I0708 12:45:21.544628    2850 pod_ready.go:81] duration metric: took 400.0705ms for pod "kube-scheduler-ha-881000" in "kube-system" namespace to be "Ready" ...
	I0708 12:45:21.544636    2850 pod_ready.go:38] duration metric: took 25.308887292s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 12:45:21.544648    2850 api_server.go:52] waiting for apiserver process to appear ...
	I0708 12:45:21.544775    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 12:45:21.552615    2850 logs.go:276] 2 containers: [5c7a6d2a7b0f db173c1aa7e6]
	I0708 12:45:21.552684    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 12:45:21.558679    2850 logs.go:276] 2 containers: [8949c5b568b1 5c4705f221f3]
	I0708 12:45:21.558737    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 12:45:21.564299    2850 logs.go:276] 4 containers: [6c32e54a9067 a01fbba041f3 57f745d9e2f1 e5decdf53e42]
	I0708 12:45:21.564354    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 12:45:21.569602    2850 logs.go:276] 2 containers: [6302ef35341b ed9f0e91126a]
	I0708 12:45:21.569653    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 12:45:21.575032    2850 logs.go:276] 2 containers: [6f04b4be84c2 e3b0434a308b]
	I0708 12:45:21.575087    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 12:45:21.580589    2850 logs.go:276] 2 containers: [493877591d89 cc323cbcdc6d]
	I0708 12:45:21.580646    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 12:45:21.586131    2850 logs.go:276] 2 containers: [f18946e45a94 8c20b27d4019]
	I0708 12:45:21.586187    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 12:45:21.591513    2850 logs.go:276] 2 containers: [f496d2b5c569 b545f59f90f8]
	I0708 12:45:21.591526    2850 logs.go:123] Gathering logs for kube-controller-manager [cc323cbcdc6d] ...
	I0708 12:45:21.591533    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc323cbcdc6d"
	I0708 12:45:21.607202    2850 logs.go:123] Gathering logs for container status ...
	I0708 12:45:21.607214    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 12:45:21.624879    2850 logs.go:123] Gathering logs for kube-apiserver [5c7a6d2a7b0f] ...
	I0708 12:45:21.624889    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c7a6d2a7b0f"
	I0708 12:45:21.635663    2850 logs.go:123] Gathering logs for coredns [6c32e54a9067] ...
	I0708 12:45:21.635673    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c32e54a9067"
	I0708 12:45:21.642127    2850 logs.go:123] Gathering logs for coredns [57f745d9e2f1] ...
	I0708 12:45:21.642136    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57f745d9e2f1"
	I0708 12:45:21.648280    2850 logs.go:123] Gathering logs for coredns [e5decdf53e42] ...
	I0708 12:45:21.648288    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5decdf53e42"
	I0708 12:45:21.655392    2850 logs.go:123] Gathering logs for kube-scheduler [6302ef35341b] ...
	I0708 12:45:21.655399    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6302ef35341b"
	I0708 12:45:21.662654    2850 logs.go:123] Gathering logs for kube-proxy [6f04b4be84c2] ...
	I0708 12:45:21.662665    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f04b4be84c2"
	I0708 12:45:21.675745    2850 logs.go:123] Gathering logs for describe nodes ...
	I0708 12:45:21.675752    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 12:45:21.723651    2850 logs.go:123] Gathering logs for kube-proxy [e3b0434a308b] ...
	I0708 12:45:21.723664    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3b0434a308b"
	I0708 12:45:21.731559    2850 logs.go:123] Gathering logs for kindnet [8c20b27d4019] ...
	I0708 12:45:21.731568    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c20b27d4019"
	I0708 12:45:21.738168    2850 logs.go:123] Gathering logs for storage-provisioner [b545f59f90f8] ...
	I0708 12:45:21.738179    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b545f59f90f8"
	I0708 12:45:21.744758    2850 logs.go:123] Gathering logs for storage-provisioner [f496d2b5c569] ...
	I0708 12:45:21.744768    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f496d2b5c569"
	I0708 12:45:21.751407    2850 logs.go:123] Gathering logs for Docker ...
	I0708 12:45:21.751417    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 12:45:21.772552    2850 logs.go:123] Gathering logs for kubelet ...
	I0708 12:45:21.772559    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 12:45:21.798834    2850 logs.go:123] Gathering logs for dmesg ...
	I0708 12:45:21.798844    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 12:45:21.803479    2850 logs.go:123] Gathering logs for kube-apiserver [db173c1aa7e6] ...
	I0708 12:45:21.803488    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db173c1aa7e6"
	I0708 12:45:21.825761    2850 logs.go:123] Gathering logs for etcd [5c4705f221f3] ...
	I0708 12:45:21.825770    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4705f221f3"
	I0708 12:45:21.836486    2850 logs.go:123] Gathering logs for kube-scheduler [ed9f0e91126a] ...
	I0708 12:45:21.836493    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9f0e91126a"
	I0708 12:45:21.848974    2850 logs.go:123] Gathering logs for kindnet [f18946e45a94] ...
	I0708 12:45:21.848986    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18946e45a94"
	I0708 12:45:21.856358    2850 logs.go:123] Gathering logs for etcd [8949c5b568b1] ...
	I0708 12:45:21.856367    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8949c5b568b1"
	I0708 12:45:21.865831    2850 logs.go:123] Gathering logs for coredns [a01fbba041f3] ...
	I0708 12:45:21.865838    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a01fbba041f3"
	I0708 12:45:21.872089    2850 logs.go:123] Gathering logs for kube-controller-manager [493877591d89] ...
	I0708 12:45:21.872098    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493877591d89"
	I0708 12:45:24.387307    2850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:45:24.393583    2850 api_server.go:72] duration metric: took 1m5.787828625s to wait for apiserver process to appear ...
	I0708 12:45:24.393594    2850 api_server.go:88] waiting for apiserver healthz status ...
	I0708 12:45:24.393665    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 12:45:24.400221    2850 logs.go:276] 2 containers: [5c7a6d2a7b0f db173c1aa7e6]
	I0708 12:45:24.400296    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 12:45:24.405445    2850 logs.go:276] 2 containers: [8949c5b568b1 5c4705f221f3]
	I0708 12:45:24.405503    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 12:45:24.410777    2850 logs.go:276] 4 containers: [6c32e54a9067 a01fbba041f3 57f745d9e2f1 e5decdf53e42]
	I0708 12:45:24.410837    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 12:45:24.415915    2850 logs.go:276] 2 containers: [6302ef35341b ed9f0e91126a]
	I0708 12:45:24.415972    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 12:45:24.421293    2850 logs.go:276] 2 containers: [6f04b4be84c2 e3b0434a308b]
	I0708 12:45:24.421346    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 12:45:24.426972    2850 logs.go:276] 2 containers: [493877591d89 cc323cbcdc6d]
	I0708 12:45:24.427024    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 12:45:24.432727    2850 logs.go:276] 2 containers: [f18946e45a94 8c20b27d4019]
	I0708 12:45:24.432774    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 12:45:24.437926    2850 logs.go:276] 2 containers: [f496d2b5c569 b545f59f90f8]
	I0708 12:45:24.437937    2850 logs.go:123] Gathering logs for storage-provisioner [f496d2b5c569] ...
	I0708 12:45:24.437942    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f496d2b5c569"
	I0708 12:45:24.447768    2850 logs.go:123] Gathering logs for describe nodes ...
	I0708 12:45:24.447783    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 12:45:24.489600    2850 logs.go:123] Gathering logs for kube-apiserver [db173c1aa7e6] ...
	I0708 12:45:24.489610    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db173c1aa7e6"
	I0708 12:45:24.511141    2850 logs.go:123] Gathering logs for etcd [5c4705f221f3] ...
	I0708 12:45:24.511152    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4705f221f3"
	I0708 12:45:24.521818    2850 logs.go:123] Gathering logs for coredns [6c32e54a9067] ...
	I0708 12:45:24.521827    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c32e54a9067"
	I0708 12:45:24.528443    2850 logs.go:123] Gathering logs for coredns [e5decdf53e42] ...
	I0708 12:45:24.528453    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5decdf53e42"
	I0708 12:45:24.534731    2850 logs.go:123] Gathering logs for kube-proxy [e3b0434a308b] ...
	I0708 12:45:24.534740    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3b0434a308b"
	I0708 12:45:24.548886    2850 logs.go:123] Gathering logs for Docker ...
	I0708 12:45:24.548895    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 12:45:24.570100    2850 logs.go:123] Gathering logs for kube-apiserver [5c7a6d2a7b0f] ...
	I0708 12:45:24.570107    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c7a6d2a7b0f"
	I0708 12:45:24.581621    2850 logs.go:123] Gathering logs for coredns [a01fbba041f3] ...
	I0708 12:45:24.581630    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a01fbba041f3"
	I0708 12:45:24.588641    2850 logs.go:123] Gathering logs for kube-proxy [6f04b4be84c2] ...
	I0708 12:45:24.588650    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f04b4be84c2"
	I0708 12:45:24.599223    2850 logs.go:123] Gathering logs for kube-controller-manager [cc323cbcdc6d] ...
	I0708 12:45:24.599232    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc323cbcdc6d"
	I0708 12:45:24.613392    2850 logs.go:123] Gathering logs for kindnet [f18946e45a94] ...
	I0708 12:45:24.613401    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18946e45a94"
	I0708 12:45:24.619751    2850 logs.go:123] Gathering logs for storage-provisioner [b545f59f90f8] ...
	I0708 12:45:24.619759    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b545f59f90f8"
	I0708 12:45:24.630591    2850 logs.go:123] Gathering logs for container status ...
	I0708 12:45:24.630599    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 12:45:24.647058    2850 logs.go:123] Gathering logs for kubelet ...
	I0708 12:45:24.647069    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 12:45:24.672902    2850 logs.go:123] Gathering logs for dmesg ...
	I0708 12:45:24.672910    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 12:45:24.677837    2850 logs.go:123] Gathering logs for etcd [8949c5b568b1] ...
	I0708 12:45:24.677844    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8949c5b568b1"
	I0708 12:45:24.687305    2850 logs.go:123] Gathering logs for coredns [57f745d9e2f1] ...
	I0708 12:45:24.687312    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57f745d9e2f1"
	I0708 12:45:24.697783    2850 logs.go:123] Gathering logs for kube-scheduler [6302ef35341b] ...
	I0708 12:45:24.697792    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6302ef35341b"
	I0708 12:45:24.704557    2850 logs.go:123] Gathering logs for kindnet [8c20b27d4019] ...
	I0708 12:45:24.704564    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c20b27d4019"
	I0708 12:45:24.711484    2850 logs.go:123] Gathering logs for kube-scheduler [ed9f0e91126a] ...
	I0708 12:45:24.711493    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9f0e91126a"
	I0708 12:45:24.720680    2850 logs.go:123] Gathering logs for kube-controller-manager [493877591d89] ...
	I0708 12:45:24.720687    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493877591d89"
	I0708 12:45:27.236929    2850 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:45:27.240051    2850 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0708 12:45:27.240085    2850 round_trippers.go:463] GET https://192.168.105.5:8443/version
	I0708 12:45:27.240088    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:27.240093    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:27.240096    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:27.240589    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:27.240629    2850 api_server.go:141] control plane version: v1.30.2
	I0708 12:45:27.240636    2850 api_server.go:131] duration metric: took 2.847107292s to wait for apiserver health ...
	I0708 12:45:27.240641    2850 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 12:45:27.240716    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 12:45:27.254338    2850 logs.go:276] 2 containers: [5c7a6d2a7b0f db173c1aa7e6]
	I0708 12:45:27.254406    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 12:45:27.260147    2850 logs.go:276] 2 containers: [8949c5b568b1 5c4705f221f3]
	I0708 12:45:27.260213    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 12:45:27.266086    2850 logs.go:276] 4 containers: [6c32e54a9067 a01fbba041f3 57f745d9e2f1 e5decdf53e42]
	I0708 12:45:27.266140    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 12:45:27.274117    2850 logs.go:276] 2 containers: [6302ef35341b ed9f0e91126a]
	I0708 12:45:27.274169    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 12:45:27.280079    2850 logs.go:276] 2 containers: [6f04b4be84c2 e3b0434a308b]
	I0708 12:45:27.280136    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 12:45:27.286177    2850 logs.go:276] 2 containers: [493877591d89 cc323cbcdc6d]
	I0708 12:45:27.286231    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 12:45:27.291733    2850 logs.go:276] 2 containers: [f18946e45a94 8c20b27d4019]
	I0708 12:45:27.291788    2850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 12:45:27.297265    2850 logs.go:276] 2 containers: [f496d2b5c569 b545f59f90f8]
	I0708 12:45:27.297282    2850 logs.go:123] Gathering logs for kindnet [f18946e45a94] ...
	I0708 12:45:27.297287    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18946e45a94"
	I0708 12:45:27.304167    2850 logs.go:123] Gathering logs for kindnet [8c20b27d4019] ...
	I0708 12:45:27.304176    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c20b27d4019"
	I0708 12:45:27.310889    2850 logs.go:123] Gathering logs for kube-proxy [6f04b4be84c2] ...
	I0708 12:45:27.310897    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f04b4be84c2"
	I0708 12:45:27.317712    2850 logs.go:123] Gathering logs for storage-provisioner [f496d2b5c569] ...
	I0708 12:45:27.317720    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f496d2b5c569"
	I0708 12:45:27.323963    2850 logs.go:123] Gathering logs for storage-provisioner [b545f59f90f8] ...
	I0708 12:45:27.323970    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b545f59f90f8"
	I0708 12:45:27.330253    2850 logs.go:123] Gathering logs for describe nodes ...
	I0708 12:45:27.330266    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 12:45:27.372208    2850 logs.go:123] Gathering logs for etcd [5c4705f221f3] ...
	I0708 12:45:27.372219    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c4705f221f3"
	I0708 12:45:27.382601    2850 logs.go:123] Gathering logs for coredns [6c32e54a9067] ...
	I0708 12:45:27.382609    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c32e54a9067"
	I0708 12:45:27.389288    2850 logs.go:123] Gathering logs for coredns [57f745d9e2f1] ...
	I0708 12:45:27.389296    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57f745d9e2f1"
	I0708 12:45:27.396159    2850 logs.go:123] Gathering logs for kube-scheduler [6302ef35341b] ...
	I0708 12:45:27.396166    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6302ef35341b"
	I0708 12:45:27.402921    2850 logs.go:123] Gathering logs for Docker ...
	I0708 12:45:27.402929    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 12:45:27.423728    2850 logs.go:123] Gathering logs for container status ...
	I0708 12:45:27.423736    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 12:45:27.440237    2850 logs.go:123] Gathering logs for kubelet ...
	I0708 12:45:27.440249    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 12:45:27.465291    2850 logs.go:123] Gathering logs for kube-apiserver [5c7a6d2a7b0f] ...
	I0708 12:45:27.465301    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c7a6d2a7b0f"
	I0708 12:45:27.476076    2850 logs.go:123] Gathering logs for coredns [a01fbba041f3] ...
	I0708 12:45:27.476087    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a01fbba041f3"
	I0708 12:45:27.482656    2850 logs.go:123] Gathering logs for kube-scheduler [ed9f0e91126a] ...
	I0708 12:45:27.482665    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9f0e91126a"
	I0708 12:45:27.492131    2850 logs.go:123] Gathering logs for kube-controller-manager [493877591d89] ...
	I0708 12:45:27.492138    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 493877591d89"
	I0708 12:45:27.508284    2850 logs.go:123] Gathering logs for kube-controller-manager [cc323cbcdc6d] ...
	I0708 12:45:27.508292    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc323cbcdc6d"
	I0708 12:45:27.522615    2850 logs.go:123] Gathering logs for dmesg ...
	I0708 12:45:27.522624    2850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 12:45:27.527631    2850 logs.go:123] Gathering logs for kube-apiserver [db173c1aa7e6] ...
	I0708 12:45:27.527640    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db173c1aa7e6"
	I0708 12:45:27.549074    2850 logs.go:123] Gathering logs for etcd [8949c5b568b1] ...
	I0708 12:45:27.549082    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8949c5b568b1"
	I0708 12:45:27.559601    2850 logs.go:123] Gathering logs for coredns [e5decdf53e42] ...
	I0708 12:45:27.559613    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5decdf53e42"
	I0708 12:45:27.567251    2850 logs.go:123] Gathering logs for kube-proxy [e3b0434a308b] ...
	I0708 12:45:27.567261    2850 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3b0434a308b"
	I0708 12:45:30.076023    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:45:30.076041    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:30.076045    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:30.076056    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:30.077861    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:30.079991    2850 system_pods.go:59] 9 kube-system pods found
	I0708 12:45:30.079999    2850 system_pods.go:61] "coredns-7db6d8ff4d-2646x" [5a1aa968-b181-4318-a7f2-fb0f94617bd5] Running
	I0708 12:45:30.080002    2850 system_pods.go:61] "coredns-7db6d8ff4d-rlj9v" [57423cc1-b13f-45c7-b2df-71621270a61f] Running
	I0708 12:45:30.080004    2850 system_pods.go:61] "etcd-ha-881000" [b905dbae-009a-44f3-87e4-756dfae87ce6] Running
	I0708 12:45:30.080005    2850 system_pods.go:61] "kindnet-mmchf" [2f8fecb7-8906-46c9-9d55-c56254b8b3d7] Running
	I0708 12:45:30.080007    2850 system_pods.go:61] "kube-apiserver-ha-881000" [ea5dbd32-5574-42d6-9efd-3956e499027a] Running
	I0708 12:45:30.080018    2850 system_pods.go:61] "kube-controller-manager-ha-881000" [3f0c772a-e298-47e5-a20d-4201060d8e09] Running
	I0708 12:45:30.080021    2850 system_pods.go:61] "kube-proxy-nqzkk" [0037978f-9b19-49c2-a0fd-a7757effb5e9] Running
	I0708 12:45:30.080023    2850 system_pods.go:61] "kube-scheduler-ha-881000" [03ce3397-c2e8-4b90-a33c-11fb0368a30e] Running
	I0708 12:45:30.080025    2850 system_pods.go:61] "storage-provisioner" [62d01d4e-c78c-499e-9905-7ff510f1edea] Running
	I0708 12:45:30.080029    2850 system_pods.go:74] duration metric: took 2.839449625s to wait for pod list to return data ...
	I0708 12:45:30.080034    2850 default_sa.go:34] waiting for default service account to be created ...
	I0708 12:45:30.080073    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0708 12:45:30.080077    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:30.080080    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:30.080083    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:30.080889    2850 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0708 12:45:30.081147    2850 default_sa.go:45] found service account: "default"
	I0708 12:45:30.081154    2850 default_sa.go:55] duration metric: took 1.117166ms for default service account to be created ...
	I0708 12:45:30.081158    2850 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 12:45:30.081179    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0708 12:45:30.081182    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:30.081186    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:30.081188    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:30.083151    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:30.084871    2850 system_pods.go:86] 9 kube-system pods found
	I0708 12:45:30.084879    2850 system_pods.go:89] "coredns-7db6d8ff4d-2646x" [5a1aa968-b181-4318-a7f2-fb0f94617bd5] Running
	I0708 12:45:30.084882    2850 system_pods.go:89] "coredns-7db6d8ff4d-rlj9v" [57423cc1-b13f-45c7-b2df-71621270a61f] Running
	I0708 12:45:30.084884    2850 system_pods.go:89] "etcd-ha-881000" [b905dbae-009a-44f3-87e4-756dfae87ce6] Running
	I0708 12:45:30.084886    2850 system_pods.go:89] "kindnet-mmchf" [2f8fecb7-8906-46c9-9d55-c56254b8b3d7] Running
	I0708 12:45:30.084888    2850 system_pods.go:89] "kube-apiserver-ha-881000" [ea5dbd32-5574-42d6-9efd-3956e499027a] Running
	I0708 12:45:30.084890    2850 system_pods.go:89] "kube-controller-manager-ha-881000" [3f0c772a-e298-47e5-a20d-4201060d8e09] Running
	I0708 12:45:30.084903    2850 system_pods.go:89] "kube-proxy-nqzkk" [0037978f-9b19-49c2-a0fd-a7757effb5e9] Running
	I0708 12:45:30.084907    2850 system_pods.go:89] "kube-scheduler-ha-881000" [03ce3397-c2e8-4b90-a33c-11fb0368a30e] Running
	I0708 12:45:30.084909    2850 system_pods.go:89] "storage-provisioner" [62d01d4e-c78c-499e-9905-7ff510f1edea] Running
	I0708 12:45:30.084912    2850 system_pods.go:126] duration metric: took 3.7505ms to wait for k8s-apps to be running ...
	I0708 12:45:30.084917    2850 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 12:45:30.084981    2850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 12:45:30.090942    2850 system_svc.go:56] duration metric: took 6.022875ms WaitForService to wait for kubelet
	I0708 12:45:30.090950    2850 kubeadm.go:576] duration metric: took 1m11.485335084s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:45:30.090960    2850 node_conditions.go:102] verifying NodePressure condition ...
	I0708 12:45:30.090991    2850 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes
	I0708 12:45:30.090994    2850 round_trippers.go:469] Request Headers:
	I0708 12:45:30.090998    2850 round_trippers.go:473]     Accept: application/json, */*
	I0708 12:45:30.091001    2850 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0708 12:45:30.092084    2850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0708 12:45:30.092353    2850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 12:45:30.092360    2850 node_conditions.go:123] node cpu capacity is 2
	I0708 12:45:30.092366    2850 node_conditions.go:105] duration metric: took 1.40375ms to run NodePressure ...
	I0708 12:45:30.092373    2850 start.go:240] waiting for startup goroutines ...
	I0708 12:45:30.092377    2850 start.go:245] waiting for cluster config update ...
	I0708 12:45:30.092383    2850 start.go:254] writing updated cluster config ...
	I0708 12:45:30.092691    2850 ssh_runner.go:195] Run: rm -f paused
	I0708 12:45:30.122217    2850 start.go:600] kubectl: 1.29.2, cluster: 1.30.2 (minor skew: 1)
	I0708 12:45:30.126527    2850 out.go:177] * Done! kubectl is now configured to use "ha-881000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 08 19:44:47 ha-881000 dockerd[896]: time="2024-07-08T19:44:47.292105245Z" level=info msg="shim disconnected" id=b545f59f90f80f0cdf0042b37be15da16017501ae82b914b769f62ea576231fa namespace=moby
	Jul 08 19:44:47 ha-881000 dockerd[896]: time="2024-07-08T19:44:47.292246613Z" level=warning msg="cleaning up after shim disconnected" id=b545f59f90f80f0cdf0042b37be15da16017501ae82b914b769f62ea576231fa namespace=moby
	Jul 08 19:44:47 ha-881000 dockerd[896]: time="2024-07-08T19:44:47.292267079Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 08 19:45:13 ha-881000 dockerd[896]: time="2024-07-08T19:45:13.126056937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:45:13 ha-881000 dockerd[896]: time="2024-07-08T19:45:13.126118427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:45:13 ha-881000 dockerd[896]: time="2024-07-08T19:45:13.126127139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:13 ha-881000 dockerd[896]: time="2024-07-08T19:45:13.126182709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.940610548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.940668194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.940674196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.940701706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.943181601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.943203776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.943208486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:19 ha-881000 dockerd[896]: time="2024-07-08T19:45:19.943233495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:20 ha-881000 cri-dockerd[1141]: time="2024-07-08T19:45:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/23677c502488831db518edb8bbdf324cf64b638d6fe121190bb059ceb940138a/resolv.conf as [nameserver 192.168.105.1]"
	Jul 08 19:45:20 ha-881000 cri-dockerd[1141]: time="2024-07-08T19:45:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7011510af1b75082a5739ec139795a85e75ff4c104b475ff6052b64c891ac506/resolv.conf as [nameserver 192.168.105.1]"
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.045893426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.045938775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.045946903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.045977830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.047576210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.047653153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.047679412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:45:20 ha-881000 dockerd[896]: time="2024-07-08T19:45:20.047758523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6c32e54a90678       2437cf7621777                                                                              11 seconds ago       Running             coredns                   1                   7011510af1b75       coredns-7db6d8ff4d-2646x
	a01fbba041f3a       2437cf7621777                                                                              11 seconds ago       Running             coredns                   1                   23677c5024888       coredns-7db6d8ff4d-rlj9v
	f496d2b5c569e       ba04bb24b9575                                                                              18 seconds ago       Running             storage-provisioner       2                   dad919ff93745       storage-provisioner
	f18946e45a948       89d73d416b992                                                                              About a minute ago   Running             kindnet-cni               1                   099ead060d0cd       kindnet-mmchf
	b545f59f90f80       ba04bb24b9575                                                                              About a minute ago   Exited              storage-provisioner       1                   dad919ff93745       storage-provisioner
	6f04b4be84c25       66dbb96a9149f                                                                              About a minute ago   Running             kube-proxy                1                   28a3ff4318c5f       kube-proxy-nqzkk
	6302ef35341bd       c7dd04b1bafeb                                                                              About a minute ago   Running             kube-scheduler            1                   684b59b7d91d5       kube-scheduler-ha-881000
	8949c5b568b19       014faa467e297                                                                              About a minute ago   Running             etcd                      1                   16b5e2057f2c5       etcd-ha-881000
	493877591d899       e1dcc3400d3ea                                                                              About a minute ago   Running             kube-controller-manager   1                   3bd1107ec9cc2       kube-controller-manager-ha-881000
	5c7a6d2a7b0fa       84c601f3f72c8                                                                              About a minute ago   Running             kube-apiserver            1                   c7b8eee4b404a       kube-apiserver-ha-881000
	57f745d9e2f1c       2437cf7621777                                                                              About a minute ago   Exited              coredns                   0                   e337c3f92f0c7       coredns-7db6d8ff4d-rlj9v
	e5decdf53e42b       2437cf7621777                                                                              About a minute ago   Exited              coredns                   0                   1752461159c80       coredns-7db6d8ff4d-2646x
	8c20b27d40191       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8   2 minutes ago        Exited              kindnet-cni               0                   52b9dd42202b7       kindnet-mmchf
	e3b0434a308bd       66dbb96a9149f                                                                              2 minutes ago        Exited              kube-proxy                0                   f031f136a08f5       kube-proxy-nqzkk
	ed9f0e91126a2       c7dd04b1bafeb                                                                              2 minutes ago        Exited              kube-scheduler            0                   e9a1e4f9ec7d4       kube-scheduler-ha-881000
	5c4705f221f30       014faa467e297                                                                              2 minutes ago        Exited              etcd                      0                   59d4e027b0867       etcd-ha-881000
	db173c1aa7e67       84c601f3f72c8                                                                              2 minutes ago        Exited              kube-apiserver            0                   3994029f9ba47       kube-apiserver-ha-881000
	cc323cbcdc6df       e1dcc3400d3ea                                                                              2 minutes ago        Exited              kube-controller-manager   0                   109f63f7b1864       kube-controller-manager-ha-881000
	
	
	==> coredns [57f745d9e2f1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6c32e54a9067] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	
	
	==> coredns [a01fbba041f3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	
	
	==> coredns [e5decdf53e42] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-881000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-881000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=ha-881000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T12_43_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:43:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-881000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 19:45:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:44:56 +0000   Mon, 08 Jul 2024 19:43:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:44:56 +0000   Mon, 08 Jul 2024 19:43:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:44:56 +0000   Mon, 08 Jul 2024 19:43:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:44:56 +0000   Mon, 08 Jul 2024 19:44:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    ha-881000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147456Ki
	  pods:               110
	System Info:
	  Machine ID:                 bcb7b02242954eb38ab118c97ee41a44
	  System UUID:                bcb7b02242954eb38ab118c97ee41a44
	  Boot ID:                    93e628f2-f162-4f4e-a0c0-1d052ecf72d3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2646x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m3s
	  kube-system                 coredns-7db6d8ff4d-rlj9v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m3s
	  kube-system                 etcd-ha-881000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m17s
	  kube-system                 kindnet-mmchf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m3s
	  kube-system                 kube-apiserver-ha-881000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-ha-881000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-nqzkk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-ha-881000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m3s               kube-proxy       
	  Normal  Starting                 74s                kube-proxy       
	  Normal  NodeHasSufficientPID     2m17s              kubelet          Node ha-881000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m17s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m17s              kubelet          Node ha-881000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s              kubelet          Node ha-881000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m17s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m4s               node-controller  Node ha-881000 event: Registered Node ha-881000 in Controller
	  Normal  NodeReady                119s               kubelet          Node ha-881000 status is now: NodeReady
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node ha-881000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node ha-881000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node ha-881000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s                node-controller  Node ha-881000 event: Registered Node ha-881000 in Controller
	
	
	==> dmesg <==
	[Jul 8 19:43] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.651397] EINJ: EINJ table not found.
	[  +0.525130] systemd-fstab-generator[117]: Ignoring "noauto" option for root device
	[  +0.160244] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000399] platform regulatory.0: Falling back to sysfs fallback for: regulatory.db
	[Jul 8 19:44] systemd-fstab-generator[496]: Ignoring "noauto" option for root device
	[  +0.074941] systemd-fstab-generator[508]: Ignoring "noauto" option for root device
	[  +1.521517] systemd-fstab-generator[785]: Ignoring "noauto" option for root device
	[  +0.191000] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.086566] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +0.089024] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +2.317040] systemd-fstab-generator[1094]: Ignoring "noauto" option for root device
	[  +0.081076] systemd-fstab-generator[1106]: Ignoring "noauto" option for root device
	[  +0.072788] systemd-fstab-generator[1118]: Ignoring "noauto" option for root device
	[  +0.092507] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.198555] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	[  +1.047479] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +0.036332] kauditd_printk_skb: 307 callbacks suppressed
	[  +5.497903] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[  +0.053664] kauditd_printk_skb: 122 callbacks suppressed
	[  +9.739594] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [5c4705f221f3] <==
	{"level":"info","ts":"2024-07-08T19:43:11.30621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.306222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.306227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2024-07-08T19:43:11.314087Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.319356Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:ha-881000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T19:43:11.321333Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.321365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.321373Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:43:11.321377Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:43:11.321518Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:43:11.325963Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T19:43:11.326646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2024-07-08T19:43:11.342065Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T19:43:11.342076Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T19:43:38.189684Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-08T19:43:38.189722Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-881000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"]}
	{"level":"warn","ts":"2024-07-08T19:43:38.189779Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T19:43:38.189828Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/07/08 19:43:38 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-08T19:43:38.204762Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-08T19:43:38.204785Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-08T19:43:38.204808Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"58de0efec1d86300","current-leader-member-id":"58de0efec1d86300"}
	{"level":"info","ts":"2024-07-08T19:43:38.205508Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-07-08T19:43:38.205571Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-07-08T19:43:38.205578Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-881000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"]}
	
	
	==> etcd [8949c5b568b1] <==
	{"level":"info","ts":"2024-07-08T19:44:13.795023Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T19:44:13.795042Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T19:44:13.795264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2024-07-08T19:44:13.795334Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2024-07-08T19:44:13.795435Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:44:13.795474Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:44:13.79952Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-08T19:44:13.802552Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T19:44:13.802735Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-07-08T19:44:13.803917Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-07-08T19:44:13.803857Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T19:44:14.790815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-08T19:44:14.790873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-08T19:44:14.790887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2024-07-08T19:44:14.790898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 3"}
	{"level":"info","ts":"2024-07-08T19:44:14.790903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 3"}
	{"level":"info","ts":"2024-07-08T19:44:14.790911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 3"}
	{"level":"info","ts":"2024-07-08T19:44:14.790922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 3"}
	{"level":"info","ts":"2024-07-08T19:44:14.791842Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:ha-881000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T19:44:14.791848Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:44:14.791947Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:44:14.792259Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T19:44:14.792275Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T19:44:14.794489Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2024-07-08T19:44:14.794501Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:45:31 up 1 min,  0 users,  load average: 0.52, 0.22, 0.08
	Linux ha-881000 5.10.207 #1 SMP PREEMPT Wed Jul 3 15:00:24 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8c20b27d4019] <==
	I0708 19:43:31.094017       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0708 19:43:31.094078       1 main.go:107] hostIP = 192.168.105.5
	podIP = 192.168.105.5
	I0708 19:43:31.094157       1 main.go:116] setting mtu 1500 for CNI 
	I0708 19:43:31.094166       1 main.go:146] kindnetd IP family: "ipv4"
	I0708 19:43:31.094171       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0708 19:43:31.198442       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:43:31.198484       1 main.go:227] handling current node
	
	
	==> kindnet [f18946e45a94] <==
	I0708 19:44:17.390067       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0708 19:44:17.390337       1 main.go:107] hostIP = 192.168.105.5
	podIP = 192.168.105.5
	I0708 19:44:17.390686       1 main.go:116] setting mtu 1500 for CNI 
	I0708 19:44:17.390726       1 main.go:146] kindnetd IP family: "ipv4"
	I0708 19:44:17.390750       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0708 19:44:47.514460       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0708 19:44:47.519866       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:44:47.519887       1 main.go:227] handling current node
	I0708 19:44:57.528696       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:44:57.528807       1 main.go:227] handling current node
	I0708 19:45:07.530350       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:45:07.530371       1 main.go:227] handling current node
	I0708 19:45:17.533059       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:45:17.533074       1 main.go:227] handling current node
	I0708 19:45:27.543017       1 main.go:223] Handling node with IPs: map[192.168.105.5:{}]
	I0708 19:45:27.543031       1 main.go:227] handling current node
	
	
	==> kube-apiserver [5c7a6d2a7b0f] <==
	I0708 19:44:15.329330       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0708 19:44:15.351148       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0708 19:44:15.351278       1 policy_source.go:224] refreshing policies
	I0708 19:44:15.385492       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0708 19:44:15.385534       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0708 19:44:15.385603       1 shared_informer.go:320] Caches are synced for configmaps
	I0708 19:44:15.385522       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0708 19:44:15.385542       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0708 19:44:15.385549       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0708 19:44:15.388521       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0708 19:44:15.392571       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0708 19:44:15.392584       1 aggregator.go:165] initial CRD sync complete...
	I0708 19:44:15.392587       1 autoregister_controller.go:141] Starting autoregister controller
	I0708 19:44:15.392589       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0708 19:44:15.392591       1 cache.go:39] Caches are synced for autoregister controller
	I0708 19:44:15.418906       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 19:44:16.286885       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0708 19:44:16.394124       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0708 19:44:16.394786       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 19:44:16.397137       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 19:44:16.833110       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0708 19:44:16.883448       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0708 19:44:16.887128       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 19:44:17.077901       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 19:44:17.079726       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [db173c1aa7e6] <==
	W0708 19:43:39.200114       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200117       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200128       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200131       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200141       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200143       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200157       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200157       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200171       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200172       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200185       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200196       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200204       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200212       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.200218       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201385       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201401       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201413       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201424       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201435       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201448       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201451       1 logging.go:59] [core] [Channel #13 SubChannel #15] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201463       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201466       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0708 19:43:39.201477       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [493877591d89] <==
	I0708 19:44:28.328303       1 shared_informer.go:320] Caches are synced for attach detach
	I0708 19:44:28.333697       1 shared_informer.go:320] Caches are synced for ephemeral
	I0708 19:44:28.336197       1 shared_informer.go:320] Caches are synced for GC
	I0708 19:44:28.339397       1 shared_informer.go:320] Caches are synced for HPA
	I0708 19:44:28.340511       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0708 19:44:28.401725       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:44:28.413152       1 shared_informer.go:320] Caches are synced for disruption
	I0708 19:44:28.414270       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:44:28.425464       1 shared_informer.go:320] Caches are synced for namespace
	I0708 19:44:28.441036       1 shared_informer.go:320] Caches are synced for service account
	I0708 19:44:28.539560       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0708 19:44:28.542690       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0708 19:44:28.542710       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0708 19:44:28.542727       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0708 19:44:28.542745       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0708 19:44:28.909273       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:44:28.931688       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:44:28.931710       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0708 19:44:58.276732       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0708 19:45:20.464652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.178µs"
	I0708 19:45:20.470126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.225µs"
	I0708 19:45:20.483242       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.994192ms"
	I0708 19:45:20.483830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.841µs"
	I0708 19:45:20.488501       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="2.582432ms"
	I0708 19:45:20.488962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.921µs"
	
	
	==> kube-controller-manager [cc323cbcdc6d] <==
	I0708 19:43:27.379516       1 shared_informer.go:320] Caches are synced for taint
	I0708 19:43:27.379569       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0708 19:43:27.379665       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-881000"
	I0708 19:43:27.379876       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0708 19:43:27.400488       1 shared_informer.go:320] Caches are synced for cronjob
	I0708 19:43:27.402642       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0708 19:43:27.449755       1 shared_informer.go:320] Caches are synced for disruption
	I0708 19:43:27.456110       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:43:27.502148       1 shared_informer.go:320] Caches are synced for attach detach
	I0708 19:43:27.506149       1 shared_informer.go:320] Caches are synced for resource quota
	I0708 19:43:27.911596       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:43:27.957884       1 shared_informer.go:320] Caches are synced for garbage collector
	I0708 19:43:27.957934       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0708 19:43:28.425227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="314.836166ms"
	I0708 19:43:28.435658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="10.396584ms"
	I0708 19:43:28.435835       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="149.208µs"
	I0708 19:43:32.844754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.079µs"
	I0708 19:43:32.851504       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.888µs"
	I0708 19:43:32.855122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.561µs"
	I0708 19:43:34.205110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.198µs"
	I0708 19:43:34.217813       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="4.734129ms"
	I0708 19:43:34.217858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.281µs"
	I0708 19:43:34.230679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.799989ms"
	I0708 19:43:34.230874       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.029µs"
	I0708 19:43:37.381649       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6f04b4be84c2] <==
	I0708 19:44:17.369768       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:44:17.378177       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.5"]
	I0708 19:44:17.395162       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:44:17.395183       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:44:17.395193       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:44:17.397365       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:44:17.397562       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:44:17.397573       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:44:17.398533       1 config.go:192] "Starting service config controller"
	I0708 19:44:17.398634       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:44:17.398654       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:44:17.398659       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:44:17.399167       1 config.go:319] "Starting node config controller"
	I0708 19:44:17.399179       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:44:17.499637       1 shared_informer.go:320] Caches are synced for node config
	I0708 19:44:17.499645       1 shared_informer.go:320] Caches are synced for service config
	I0708 19:44:17.499654       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e3b0434a308b] <==
	I0708 19:43:28.503731       1 server_linux.go:69] "Using iptables proxy"
	I0708 19:43:28.508302       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.5"]
	I0708 19:43:28.516101       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0708 19:43:28.516115       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0708 19:43:28.516122       1 server_linux.go:165] "Using iptables Proxier"
	I0708 19:43:28.516705       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0708 19:43:28.516832       1 server.go:872] "Version info" version="v1.30.2"
	I0708 19:43:28.516838       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:43:28.517447       1 config.go:192] "Starting service config controller"
	I0708 19:43:28.517466       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0708 19:43:28.517525       1 config.go:101] "Starting endpoint slice config controller"
	I0708 19:43:28.517530       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0708 19:43:28.517796       1 config.go:319] "Starting node config controller"
	I0708 19:43:28.518198       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0708 19:43:28.618095       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0708 19:43:28.618123       1 shared_informer.go:320] Caches are synced for service config
	I0708 19:43:28.618242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6302ef35341b] <==
	I0708 19:44:14.406492       1 serving.go:380] Generated self-signed cert in-memory
	W0708 19:44:15.312943       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 19:44:15.312960       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 19:44:15.312966       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 19:44:15.312969       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 19:44:15.339579       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0708 19:44:15.339594       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 19:44:15.341275       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0708 19:44:15.342938       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0708 19:44:15.342985       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0708 19:44:15.347033       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 19:44:15.448496       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ed9f0e91126a] <==
	E0708 19:43:12.068934       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 19:43:12.068365       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 19:43:12.068955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 19:43:12.068385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.068976       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.068397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 19:43:12.069003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0708 19:43:12.068425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.069013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.068441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 19:43:12.069033       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 19:43:12.068458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 19:43:12.069050       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 19:43:12.068468       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 19:43:12.069087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 19:43:12.068628       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 19:43:12.069141       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 19:43:12.068640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.069171       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.978094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.978251       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0708 19:43:12.987481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 19:43:12.987495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0708 19:43:13.665698       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0708 19:43:38.188521       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 08 19:44:44 ha-881000 kubelet[1395]: E0708 19:44:44.081841    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-rlj9v" podUID="57423cc1-b13f-45c7-b2df-71621270a61f"
	Jul 08 19:44:46 ha-881000 kubelet[1395]: E0708 19:44:46.081392    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-rlj9v" podUID="57423cc1-b13f-45c7-b2df-71621270a61f"
	Jul 08 19:44:46 ha-881000 kubelet[1395]: E0708 19:44:46.081437    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-2646x" podUID="5a1aa968-b181-4318-a7f2-fb0f94617bd5"
	Jul 08 19:44:47 ha-881000 kubelet[1395]: I0708 19:44:47.321286    1395 scope.go:117] "RemoveContainer" containerID="0ae23ac6a69913979208465e09595f104e772632f3254444bde6cc9b187e4cc3"
	Jul 08 19:44:47 ha-881000 kubelet[1395]: I0708 19:44:47.321433    1395 scope.go:117] "RemoveContainer" containerID="b545f59f90f80f0cdf0042b37be15da16017501ae82b914b769f62ea576231fa"
	Jul 08 19:44:47 ha-881000 kubelet[1395]: E0708 19:44:47.321518    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(62d01d4e-c78c-499e-9905-7ff510f1edea)\"" pod="kube-system/storage-provisioner" podUID="62d01d4e-c78c-499e-9905-7ff510f1edea"
	Jul 08 19:44:47 ha-881000 kubelet[1395]: E0708 19:44:47.701938    1395 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 08 19:44:47 ha-881000 kubelet[1395]: E0708 19:44:47.701966    1395 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 08 19:44:47 ha-881000 kubelet[1395]: E0708 19:44:47.701998    1395 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57423cc1-b13f-45c7-b2df-71621270a61f-config-volume podName:57423cc1-b13f-45c7-b2df-71621270a61f nodeName:}" failed. No retries permitted until 2024-07-08 19:45:19.701983434 +0000 UTC m=+66.689156929 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/57423cc1-b13f-45c7-b2df-71621270a61f-config-volume") pod "coredns-7db6d8ff4d-rlj9v" (UID: "57423cc1-b13f-45c7-b2df-71621270a61f") : object "kube-system"/"coredns" not registered
	Jul 08 19:44:47 ha-881000 kubelet[1395]: E0708 19:44:47.702005    1395 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a1aa968-b181-4318-a7f2-fb0f94617bd5-config-volume podName:5a1aa968-b181-4318-a7f2-fb0f94617bd5 nodeName:}" failed. No retries permitted until 2024-07-08 19:45:19.702001963 +0000 UTC m=+66.689175500 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5a1aa968-b181-4318-a7f2-fb0f94617bd5-config-volume") pod "coredns-7db6d8ff4d-2646x" (UID: "5a1aa968-b181-4318-a7f2-fb0f94617bd5") : object "kube-system"/"coredns" not registered
	Jul 08 19:44:48 ha-881000 kubelet[1395]: E0708 19:44:48.081943    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-rlj9v" podUID="57423cc1-b13f-45c7-b2df-71621270a61f"
	Jul 08 19:44:48 ha-881000 kubelet[1395]: E0708 19:44:48.081967    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-2646x" podUID="5a1aa968-b181-4318-a7f2-fb0f94617bd5"
	Jul 08 19:44:48 ha-881000 kubelet[1395]: E0708 19:44:48.123228    1395 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Jul 08 19:44:50 ha-881000 kubelet[1395]: E0708 19:44:50.081719    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-2646x" podUID="5a1aa968-b181-4318-a7f2-fb0f94617bd5"
	Jul 08 19:44:50 ha-881000 kubelet[1395]: E0708 19:44:50.081719    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-rlj9v" podUID="57423cc1-b13f-45c7-b2df-71621270a61f"
	Jul 08 19:44:52 ha-881000 kubelet[1395]: E0708 19:44:52.083834    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-2646x" podUID="5a1aa968-b181-4318-a7f2-fb0f94617bd5"
	Jul 08 19:44:52 ha-881000 kubelet[1395]: E0708 19:44:52.084071    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-rlj9v" podUID="57423cc1-b13f-45c7-b2df-71621270a61f"
	Jul 08 19:45:02 ha-881000 kubelet[1395]: I0708 19:45:02.082351    1395 scope.go:117] "RemoveContainer" containerID="b545f59f90f80f0cdf0042b37be15da16017501ae82b914b769f62ea576231fa"
	Jul 08 19:45:02 ha-881000 kubelet[1395]: E0708 19:45:02.082661    1395 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(62d01d4e-c78c-499e-9905-7ff510f1edea)\"" pod="kube-system/storage-provisioner" podUID="62d01d4e-c78c-499e-9905-7ff510f1edea"
	Jul 08 19:45:13 ha-881000 kubelet[1395]: I0708 19:45:13.082459    1395 scope.go:117] "RemoveContainer" containerID="b545f59f90f80f0cdf0042b37be15da16017501ae82b914b769f62ea576231fa"
	Jul 08 19:45:13 ha-881000 kubelet[1395]: E0708 19:45:13.090432    1395 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 08 19:45:13 ha-881000 kubelet[1395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 08 19:45:13 ha-881000 kubelet[1395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 08 19:45:13 ha-881000 kubelet[1395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 08 19:45:13 ha-881000 kubelet[1395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [b545f59f90f8] <==
	I0708 19:44:17.284889       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0708 19:44:47.287330       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f496d2b5c569] <==
	I0708 19:45:13.154098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 19:45:13.158957       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 19:45:13.159047       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 19:45:30.543816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 19:45:30.544192       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-881000_48b3a288-eb17-458c-84d4-bbd1f4131e85!
	I0708 19:45:30.544592       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bb7994d-1374-425c-b6a5-ded5a8749b0f", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-881000_48b3a288-eb17-458c-84d4-bbd1f4131e85 became leader
	I0708 19:45:30.645581       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-881000_48b3a288-eb17-458c-84d4-bbd1f4131e85!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ha-881000 -n ha-881000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-881000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.07s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (355.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-881000 --control-plane -v=7 --alsologtostderr
E0708 12:45:52.058566    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:47:16.057642    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:47:43.767742    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:50:52.034877    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-881000 --control-plane -v=7 --alsologtostderr: exit status 80 (5m41.987725208s)

                                                
                                                
-- stdout --
	* Adding node m02 to cluster ha-881000 as [worker control-plane]
	* Starting "ha-881000-m02" control-plane node in "ha-881000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:45:32.390746    2896 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:45:32.390898    2896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:45:32.390902    2896 out.go:304] Setting ErrFile to fd 2...
	I0708 12:45:32.390904    2896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:45:32.391063    2896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:45:32.391317    2896 mustload.go:65] Loading cluster: ha-881000
	I0708 12:45:32.391515    2896 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:45:32.392237    2896 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:45:32.392339    2896 api_server.go:166] Checking apiserver status ...
	I0708 12:45:32.392367    2896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:45:32.392374    2896 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:45:32.417623    2896 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1747/cgroup
	W0708 12:45:32.420954    2896 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1747/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0708 12:45:32.420986    2896 ssh_runner.go:195] Run: ls
	I0708 12:45:32.422626    2896 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0708 12:45:32.425356    2896 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	W0708 12:45:32.425377    2896 out.go:239] X Adding a control-plane node to a non-HA (non-multi-control plane) cluster is not currently supported. Please first delete the cluster and use 'minikube start --ha' to create new one.
	X Adding a control-plane node to a non-HA (non-multi-control plane) cluster is not currently supported. Please first delete the cluster and use 'minikube start --ha' to create new one.
	I0708 12:45:32.428474    2896 out.go:177] * Adding node m02 to cluster ha-881000 as [worker control-plane]
	I0708 12:45:32.431348    2896 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:45:32.431509    2896 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:45:32.431553    2896 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json ...
	I0708 12:45:32.436348    2896 out.go:177] * Starting "ha-881000-m02" control-plane node in "ha-881000" cluster
	I0708 12:45:32.443373    2896 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:45:32.443394    2896 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:45:32.443403    2896 cache.go:56] Caching tarball of preloaded images
	I0708 12:45:32.443517    2896 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:45:32.443525    2896 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:45:32.443559    2896 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json ...
	I0708 12:45:32.443933    2896 start.go:360] acquireMachinesLock for ha-881000-m02: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:45:32.443972    2896 start.go:364] duration metric: took 29.084µs to acquireMachinesLock for "ha-881000-m02"
	I0708 12:45:32.443983    2896 start.go:93] Provisioning new machine with config: &{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:true Worker:true}
	I0708 12:45:32.444079    2896 start.go:125] createHost starting for "m02" (driver="qemu2")
	I0708 12:45:32.447369    2896 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 12:45:32.462791    2896 start.go:159] libmachine.API.Create for "ha-881000" (driver="qemu2")
	I0708 12:45:32.462817    2896 client.go:168] LocalClient.Create starting
	I0708 12:45:32.462887    2896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 12:45:32.462911    2896 main.go:141] libmachine: Decoding PEM data...
	I0708 12:45:32.462921    2896 main.go:141] libmachine: Parsing certificate...
	I0708 12:45:32.462965    2896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 12:45:32.462988    2896 main.go:141] libmachine: Decoding PEM data...
	I0708 12:45:32.462996    2896 main.go:141] libmachine: Parsing certificate...
	I0708 12:45:32.463330    2896 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 12:45:32.628125    2896 main.go:141] libmachine: Creating SSH key...
	I0708 12:45:32.858468    2896 main.go:141] libmachine: Creating Disk image...
	I0708 12:45:32.858477    2896 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 12:45:32.858765    2896 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000-m02/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000-m02/disk.qcow2
	I0708 12:45:32.868693    2896 main.go:141] libmachine: STDOUT: 
	I0708 12:45:32.868712    2896 main.go:141] libmachine: STDERR: 
	I0708 12:45:32.868774    2896 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000-m02/disk.qcow2 +20000M
	I0708 12:45:32.876841    2896 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 12:45:32.876855    2896 main.go:141] libmachine: STDERR: 
	I0708 12:45:32.876867    2896 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000-m02/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000-m02/disk.qcow2
	I0708 12:45:32.876872    2896 main.go:141] libmachine: Starting QEMU VM...
	I0708 12:45:32.876917    2896 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:c0:8d:92:26:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000-m02/disk.qcow2
	I0708 12:45:32.913492    2896 main.go:141] libmachine: STDOUT: 
	I0708 12:45:32.913512    2896 main.go:141] libmachine: STDERR: 
	I0708 12:45:32.913516    2896 main.go:141] libmachine: Attempt 0
	I0708 12:45:32.913526    2896 main.go:141] libmachine: Searching for aa:c0:8d:92:26:5a in /var/db/dhcpd_leases ...
	I0708 12:45:32.913600    2896 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0708 12:45:32.913619    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:45:32.913629    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:45:32.913636    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:45:32.913642    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:45:32.913648    2896 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:45:34.915694    2896 main.go:141] libmachine: Attempt 1
	I0708 12:45:34.915723    2896 main.go:141] libmachine: Searching for aa:c0:8d:92:26:5a in /var/db/dhcpd_leases ...
	I0708 12:45:34.915850    2896 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0708 12:45:34.915881    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:45:34.915887    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:45:34.915892    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:45:34.915898    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:45:34.915903    2896 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:45:36.917956    2896 main.go:141] libmachine: Attempt 2
	I0708 12:45:36.917981    2896 main.go:141] libmachine: Searching for aa:c0:8d:92:26:5a in /var/db/dhcpd_leases ...
	I0708 12:45:36.918145    2896 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0708 12:45:36.918157    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:45:36.918162    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:45:36.918168    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:45:36.918173    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:45:36.918177    2896 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:45:38.920305    2896 main.go:141] libmachine: Attempt 3
	I0708 12:45:38.920359    2896 main.go:141] libmachine: Searching for aa:c0:8d:92:26:5a in /var/db/dhcpd_leases ...
	I0708 12:45:38.920472    2896 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0708 12:45:38.920487    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:45:38.920494    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:45:38.920498    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:45:38.920503    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:45:38.920508    2896 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:45:40.922555    2896 main.go:141] libmachine: Attempt 4
	I0708 12:45:40.922577    2896 main.go:141] libmachine: Searching for aa:c0:8d:92:26:5a in /var/db/dhcpd_leases ...
	I0708 12:45:40.922736    2896 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0708 12:45:40.922749    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:45:40.922754    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:45:40.922759    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:45:40.922763    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:45:40.922778    2896 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:45:42.924844    2896 main.go:141] libmachine: Attempt 5
	I0708 12:45:42.924868    2896 main.go:141] libmachine: Searching for aa:c0:8d:92:26:5a in /var/db/dhcpd_leases ...
	I0708 12:45:42.924997    2896 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0708 12:45:42.925025    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:45:42.925030    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:45:42.925034    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:45:42.925038    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:45:42.925046    2896 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:45:44.927068    2896 main.go:141] libmachine: Attempt 6
	I0708 12:45:44.927095    2896 main.go:141] libmachine: Searching for aa:c0:8d:92:26:5a in /var/db/dhcpd_leases ...
	I0708 12:45:44.927199    2896 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0708 12:45:44.927210    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:45:44.927216    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:45:44.927221    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:45:44.927227    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:45:44.927231    2896 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d8f48}
	I0708 12:45:46.929254    2896 main.go:141] libmachine: Attempt 7
	I0708 12:45:46.929274    2896 main.go:141] libmachine: Searching for aa:c0:8d:92:26:5a in /var/db/dhcpd_leases ...
	I0708 12:45:46.929401    2896 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0708 12:45:46.929417    2896 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:aa:c0:8d:92:26:5a ID:1,aa:c0:8d:92:26:5a Lease:0x668d9369}
	I0708 12:45:46.929420    2896 main.go:141] libmachine: Found match: aa:c0:8d:92:26:5a
	I0708 12:45:46.929428    2896 main.go:141] libmachine: IP: 192.168.105.6
	I0708 12:45:46.929433    2896 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I0708 12:45:54.947702    2896 machine.go:94] provisionDockerMachine start ...
	I0708 12:45:54.947752    2896 main.go:141] libmachine: Using SSH client type: native
	I0708 12:45:54.948157    2896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c22920] 0x100c25180 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0708 12:45:54.948163    2896 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 12:45:54.996343    2896 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 12:45:54.996359    2896 buildroot.go:166] provisioning hostname "ha-881000-m02"
	I0708 12:45:54.996417    2896 main.go:141] libmachine: Using SSH client type: native
	I0708 12:45:54.996553    2896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c22920] 0x100c25180 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0708 12:45:54.996559    2896 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-881000-m02 && echo "ha-881000-m02" | sudo tee /etc/hostname
	I0708 12:45:55.049377    2896 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-881000-m02
	
	I0708 12:45:55.049433    2896 main.go:141] libmachine: Using SSH client type: native
	I0708 12:45:55.049557    2896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c22920] 0x100c25180 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0708 12:45:55.049565    2896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-881000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-881000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-881000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 12:45:55.099728    2896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 12:45:55.099741    2896 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19195-1270/.minikube CaCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19195-1270/.minikube}
	I0708 12:45:55.099748    2896 buildroot.go:174] setting up certificates
	I0708 12:45:55.099752    2896 provision.go:84] configureAuth start
	I0708 12:45:55.099798    2896 provision.go:143] copyHostCerts
	I0708 12:45:55.099824    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 12:45:55.099880    2896 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem, removing ...
	I0708 12:45:55.099884    2896 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 12:45:55.100000    2896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem (1078 bytes)
	I0708 12:45:55.100156    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 12:45:55.100180    2896 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem, removing ...
	I0708 12:45:55.100183    2896 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 12:45:55.100242    2896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem (1123 bytes)
	I0708 12:45:55.100328    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 12:45:55.100352    2896 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem, removing ...
	I0708 12:45:55.100368    2896 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 12:45:55.100445    2896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem (1675 bytes)
	I0708 12:45:55.100555    2896 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem org=jenkins.ha-881000-m02 san=[127.0.0.1 192.168.105.6 ha-881000-m02 localhost minikube]
	I0708 12:45:55.361330    2896 provision.go:177] copyRemoteCerts
	I0708 12:45:55.361371    2896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 12:45:55.361381    2896 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000-m02/id_rsa Username:docker}
	I0708 12:45:55.390545    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0708 12:45:55.390618    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0708 12:45:55.399691    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0708 12:45:55.399743    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 12:45:55.408164    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0708 12:45:55.408212    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 12:45:55.416235    2896 provision.go:87] duration metric: took 316.484916ms to configureAuth
	I0708 12:45:55.416244    2896 buildroot.go:189] setting minikube options for container-runtime
	I0708 12:45:55.417075    2896 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:45:55.417126    2896 main.go:141] libmachine: Using SSH client type: native
	I0708 12:45:55.417220    2896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c22920] 0x100c25180 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0708 12:45:55.417227    2896 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0708 12:45:55.464576    2896 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0708 12:45:55.464586    2896 buildroot.go:70] root file system type: tmpfs
	I0708 12:45:55.464637    2896 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0708 12:45:55.464684    2896 main.go:141] libmachine: Using SSH client type: native
	I0708 12:45:55.464804    2896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c22920] 0x100c25180 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0708 12:45:55.464837    2896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0708 12:45:55.516813    2896 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0708 12:45:55.516869    2896 main.go:141] libmachine: Using SSH client type: native
	I0708 12:45:55.516979    2896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c22920] 0x100c25180 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0708 12:45:55.516987    2896 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0708 12:45:56.826397    2896 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0708 12:45:56.826412    2896 machine.go:97] duration metric: took 1.878743458s to provisionDockerMachine
	I0708 12:45:56.826419    2896 client.go:171] duration metric: took 24.364181166s to LocalClient.Create
	I0708 12:45:56.826434    2896 start.go:167] duration metric: took 24.364227959s to libmachine.API.Create "ha-881000"
	I0708 12:45:56.826439    2896 start.go:293] postStartSetup for "ha-881000-m02" (driver="qemu2")
	I0708 12:45:56.826445    2896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 12:45:56.826517    2896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 12:45:56.826528    2896 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000-m02/id_rsa Username:docker}
	I0708 12:45:56.855111    2896 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 12:45:56.856693    2896 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 12:45:56.856701    2896 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/addons for local assets ...
	I0708 12:45:56.856803    2896 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/files for local assets ...
	I0708 12:45:56.856920    2896 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> 17672.pem in /etc/ssl/certs
	I0708 12:45:56.856926    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> /etc/ssl/certs/17672.pem
	I0708 12:45:56.857046    2896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 12:45:56.860308    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /etc/ssl/certs/17672.pem (1708 bytes)
	I0708 12:45:56.868723    2896 start.go:296] duration metric: took 42.281ms for postStartSetup
	I0708 12:45:56.869174    2896 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/config.json ...
	I0708 12:45:56.869373    2896 start.go:128] duration metric: took 24.425872958s to createHost
	I0708 12:45:56.869396    2896 main.go:141] libmachine: Using SSH client type: native
	I0708 12:45:56.869481    2896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c22920] 0x100c25180 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0708 12:45:56.869485    2896 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 12:45:56.915786    2896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720467957.114487464
	
	I0708 12:45:56.915795    2896 fix.go:216] guest clock: 1720467957.114487464
	I0708 12:45:56.915799    2896 fix.go:229] Guest: 2024-07-08 12:45:57.114487464 -0700 PDT Remote: 2024-07-08 12:45:56.869377 -0700 PDT m=+24.500051335 (delta=245.110464ms)
	I0708 12:45:56.915810    2896 fix.go:200] guest clock delta is within tolerance: 245.110464ms
	I0708 12:45:56.915817    2896 start.go:83] releasing machines lock for "ha-881000-m02", held for 24.472424584s
	I0708 12:45:56.916166    2896 ssh_runner.go:195] Run: systemctl --version
	I0708 12:45:56.916169    2896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 12:45:56.916174    2896 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000-m02/id_rsa Username:docker}
	I0708 12:45:56.916191    2896 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000-m02/id_rsa Username:docker}
	I0708 12:45:56.941827    2896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 12:45:56.983405    2896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 12:45:56.983457    2896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 12:45:56.990064    2896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 12:45:56.990077    2896 start.go:494] detecting cgroup driver to use...
	I0708 12:45:56.990149    2896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:45:56.996634    2896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0708 12:45:57.000383    2896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0708 12:45:57.004103    2896 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0708 12:45:57.004129    2896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0708 12:45:57.008022    2896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:45:57.012012    2896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0708 12:45:57.015716    2896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:45:57.019658    2896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 12:45:57.023301    2896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0708 12:45:57.026870    2896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0708 12:45:57.030729    2896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0708 12:45:57.034696    2896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 12:45:57.038418    2896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 12:45:57.041916    2896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:45:57.128253    2896 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0708 12:45:57.139750    2896 start.go:494] detecting cgroup driver to use...
	I0708 12:45:57.139822    2896 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0708 12:45:57.147135    2896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:45:57.153017    2896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 12:45:57.161249    2896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:45:57.166731    2896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:45:57.172145    2896 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0708 12:45:57.219083    2896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:45:57.224969    2896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:45:57.231403    2896 ssh_runner.go:195] Run: which cri-dockerd
	I0708 12:45:57.232875    2896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0708 12:45:57.236169    2896 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0708 12:45:57.242686    2896 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0708 12:45:57.319761    2896 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0708 12:45:57.390345    2896 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0708 12:45:57.390483    2896 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0708 12:45:57.396556    2896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:45:57.483242    2896 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 12:45:59.613453    2896 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.130244291s)
	I0708 12:45:59.613528    2896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0708 12:45:59.619017    2896 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0708 12:45:59.626094    2896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 12:45:59.631668    2896 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0708 12:45:59.722782    2896 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0708 12:45:59.816760    2896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:45:59.892577    2896 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0708 12:45:59.899521    2896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 12:45:59.905366    2896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:45:59.992617    2896 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0708 12:46:00.019179    2896 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0708 12:46:00.019253    2896 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0708 12:46:00.022999    2896 start.go:562] Will wait 60s for crictl version
	I0708 12:46:00.023049    2896 ssh_runner.go:195] Run: which crictl
	I0708 12:46:00.024533    2896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 12:46:00.044765    2896 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0708 12:46:00.044844    2896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 12:46:00.055458    2896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 12:46:00.067300    2896 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0708 12:46:00.067432    2896 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0708 12:46:00.068831    2896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 12:46:00.073139    2896 mustload.go:65] Loading cluster: ha-881000
	I0708 12:46:00.073268    2896 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:46:00.073807    2896 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:46:00.073905    2896 certs.go:68] Setting up /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000 for IP: 192.168.105.6
	I0708 12:46:00.073909    2896 certs.go:194] generating shared ca certs ...
	I0708 12:46:00.073915    2896 certs.go:226] acquiring lock for ca certs: {Name:mka13b605a6983b2618b91f3a0bdec43c132a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:46:00.074023    2896 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key
	I0708 12:46:00.074067    2896 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key
	I0708 12:46:00.074072    2896 certs.go:256] generating profile certs ...
	I0708 12:46:00.074143    2896 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/client.key
	I0708 12:46:00.074157    2896 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.73a47d4c
	I0708 12:46:00.074167    2896 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt.73a47d4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5 192.168.105.6 <nil>]
	I0708 12:46:00.180971    2896 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt.73a47d4c ...
	I0708 12:46:00.180977    2896 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt.73a47d4c: {Name:mkcf58190853d30d03cb2a2a3e1370be15f96483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:46:00.181332    2896 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.73a47d4c ...
	I0708 12:46:00.181337    2896 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.73a47d4c: {Name:mk55a66896974ef5d031f3801624bcd0bf937c27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:46:00.181470    2896 certs.go:381] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt.73a47d4c -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt
	I0708 12:46:00.181608    2896 certs.go:385] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key.73a47d4c -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key
	I0708 12:46:00.181754    2896 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key
	I0708 12:46:00.181760    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0708 12:46:00.181772    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0708 12:46:00.181782    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0708 12:46:00.181793    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0708 12:46:00.181805    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0708 12:46:00.181816    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0708 12:46:00.181827    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0708 12:46:00.181838    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0708 12:46:00.181897    2896 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem (1338 bytes)
	W0708 12:46:00.181925    2896 certs.go:480] ignoring /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767_empty.pem, impossibly tiny 0 bytes
	I0708 12:46:00.181929    2896 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 12:46:00.181950    2896 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem (1078 bytes)
	I0708 12:46:00.181967    2896 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem (1123 bytes)
	I0708 12:46:00.181985    2896 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem (1675 bytes)
	I0708 12:46:00.182023    2896 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem (1708 bytes)
	I0708 12:46:00.182043    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:46:00.182054    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem -> /usr/share/ca-certificates/1767.pem
	I0708 12:46:00.182064    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> /usr/share/ca-certificates/17672.pem
	I0708 12:46:00.182080    2896 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:46:00.202551    2896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0708 12:46:00.204301    2896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0708 12:46:00.209271    2896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0708 12:46:00.210738    2896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0708 12:46:00.214989    2896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0708 12:46:00.216442    2896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0708 12:46:00.220226    2896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0708 12:46:00.221694    2896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0708 12:46:00.225665    2896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0708 12:46:00.227183    2896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0708 12:46:00.231569    2896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0708 12:46:00.233088    2896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0708 12:46:00.237108    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 12:46:00.246187    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 12:46:00.254688    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 12:46:00.263329    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 12:46:00.271875    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0708 12:46:00.280456    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 12:46:00.288690    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 12:46:00.297000    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/ha-881000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 12:46:00.305564    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 12:46:00.314207    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem --> /usr/share/ca-certificates/1767.pem (1338 bytes)
	I0708 12:46:00.322602    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /usr/share/ca-certificates/17672.pem (1708 bytes)
	I0708 12:46:00.331386    2896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0708 12:46:00.337699    2896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0708 12:46:00.343915    2896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0708 12:46:00.349941    2896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0708 12:46:00.355823    2896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0708 12:46:00.362153    2896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0708 12:46:00.367980    2896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0708 12:46:00.374176    2896 ssh_runner.go:195] Run: openssl version
	I0708 12:46:00.376275    2896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 12:46:00.380331    2896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:46:00.382008    2896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:46:00.382029    2896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:46:00.384083    2896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 12:46:00.388376    2896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1767.pem && ln -fs /usr/share/ca-certificates/1767.pem /etc/ssl/certs/1767.pem"
	I0708 12:46:00.392420    2896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1767.pem
	I0708 12:46:00.394205    2896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:34 /usr/share/ca-certificates/1767.pem
	I0708 12:46:00.394228    2896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1767.pem
	I0708 12:46:00.396293    2896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1767.pem /etc/ssl/certs/51391683.0"
	I0708 12:46:00.400470    2896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17672.pem && ln -fs /usr/share/ca-certificates/17672.pem /etc/ssl/certs/17672.pem"
	I0708 12:46:00.404435    2896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17672.pem
	I0708 12:46:00.406045    2896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:34 /usr/share/ca-certificates/17672.pem
	I0708 12:46:00.406063    2896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17672.pem
	I0708 12:46:00.408162    2896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17672.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 12:46:00.412042    2896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 12:46:00.413551    2896 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 12:46:00.413588    2896 kubeadm.go:928] updating node {m02 192.168.105.6 8443 v1.30.2  true true} ...
	I0708 12:46:00.413640    2896 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-881000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 12:46:00.413650    2896 kube-vip.go:115] generating kube-vip config ...
	I0708 12:46:00.413666    2896 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0708 12:46:00.421004    2896 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0708 12:46:00.421066    2896 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0708 12:46:00.421112    2896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 12:46:00.425609    2896 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0708 12:46:00.425639    2896 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0708 12:46:00.430478    2896 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/arm64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/linux/arm64/v1.30.2/kubelet
	I0708 12:46:00.430478    2896 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/arm64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/linux/arm64/v1.30.2/kubeadm
	I0708 12:46:00.430478    2896 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/linux/arm64/v1.30.2/kubectl
	I0708 12:46:04.505247    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/linux/arm64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0708 12:46:04.505333    2896 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0708 12:46:04.507495    2896 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0708 12:46:04.507511    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/linux/arm64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (49938584 bytes)
	I0708 12:46:08.535524    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/linux/arm64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0708 12:46:08.535597    2896 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0708 12:46:08.537755    2896 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0708 12:46:08.537772    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/linux/arm64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (48955544 bytes)
	I0708 12:46:13.523115    2896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 12:46:13.529480    2896 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/linux/arm64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0708 12:46:13.529561    2896 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0708 12:46:13.531019    2896 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0708 12:46:13.531033    2896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/linux/arm64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (96463128 bytes)
	I0708 12:46:14.043849    2896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0708 12:46:14.047171    2896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0708 12:46:14.053189    2896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 12:46:14.059315    2896 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1427 bytes)
	I0708 12:46:14.065275    2896 ssh_runner.go:195] Run: grep <nil>	control-plane.minikube.internal$ /etc/hosts
	I0708 12:46:14.066658    2896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "<nil>	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 12:46:14.070913    2896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:46:14.148263    2896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 12:46:14.155634    2896 host.go:66] Checking if "ha-881000" exists ...
	I0708 12:46:14.155809    2896 start.go:316] joinCluster: &{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cluste
rName:ha-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:46:14.155849    2896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0708 12:46:14.155858    2896 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/ha-881000/id_rsa Username:docker}
	I0708 12:46:14.206836    2896 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:true Worker:true}
	I0708 12:46:14.206867    2896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lqiws2.fb37087xc7oivlv3 --discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-881000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443"
	I0708 12:51:14.271960    2896 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lqiws2.fb37087xc7oivlv3 --discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-881000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443": (5m0.089690125s)
	E0708 12:51:14.272005    2896 start.go:344] control-plane node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lqiws2.fb37087xc7oivlv3 --discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-881000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: couldn't validate the identity of the API Server: failed to request the cluster-info ConfigMap: client rate limiter Wait returned an error: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	I0708 12:51:14.272017    2896 start.go:347] resetting control-plane node "m02" before attempting to rejoin cluster...
	I0708 12:51:14.272027    2896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --force"
	I0708 12:51:14.306345    2896 start.go:351] successfully reset control-plane node "m02"
	I0708 12:51:14.306386    2896 start.go:318] duration metric: took 5m0.175198625s to joinCluster
	I0708 12:51:14.310848    2896 out.go:177] 
	W0708 12:51:14.313760    2896 out.go:239] X Exiting due to GUEST_NODE_ADD: failed to add node: join node to cluster: error joining control-plane node "m02" to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lqiws2.fb37087xc7oivlv3 --discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-881000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: couldn't validate the identity of the API Server: failed to request the cluster-info ConfigMap: client rate limiter Wait returned an error: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_NODE_ADD: failed to add node: join node to cluster: error joining control-plane node "m02" to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lqiws2.fb37087xc7oivlv3 --discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-881000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: couldn't validate the identity of the API Server: failed to request the cluster-info ConfigMap: client rate limiter Wait returned an error: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	
	W0708 12:51:14.313766    2896 out.go:239] * 
	* 
	W0708 12:51:14.315309    2896 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:51:14.318704    2896 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-881000 --control-plane -v=7 --alsologtostderr" : exit status 80
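A minimal triage sketch for this failure, assuming the ha-881000 profile were still running (hypothetical commands; the node name m02 and the control-plane endpoint are taken from the log above, and none of this was executed in the captured run): the stderr shows kubeadm timing out while fetching the cluster-info ConfigMap and warning that the kubelet service is not enabled on the joining node, so the obvious checks are kubelet state on m02 and reachability of the advertised endpoint.

    minikube -p ha-881000 ssh -n m02 -- sudo systemctl enable --now kubelet
    minikube -p ha-881000 kubectl -- -n kube-public get configmap cluster-info -o yaml
    minikube -p ha-881000 ssh -n m02 -- curl -k https://control-plane.minikube.internal:8443/healthz

Note that earlier in this log the entry written to /etc/hosts on m02 is "<nil>	control-plane.minikube.internal", so the last check would likely fail to resolve; whether that is the actual root cause is not established by this run.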
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 6 (13.873538208s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 12:51:14.396107    3039 status.go:417] kubeconfig endpoint: empty host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (355.87s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (14.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.2921825s)
ha_test.go:304: expected profile "ha-881000" in json of 'profile list' to include 4 nodes but have 2 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-881000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-881000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-881000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":f
alse,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-881000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-881000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-881000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-881000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\"
:\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOpti
mizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 6 (13.304948542s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0708 12:51:29.562538    3056 status.go:417] kubeconfig endpoint: empty host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (14.60s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.96s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-095000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-095000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 68630af928b6
	 ---> Removed intermediate container 68630af928b6
	 ---> 171a37da6456
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 4b27feb0f297
	 ---> Removed intermediate container 4b27feb0f297
	 ---> cfd2130dd50b
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in ae7e14d84c70
	exec /bin/sh: exec format error
	

                                                
                                                
-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

                                                
                                                
** /stderr **
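The warnings in the build output say the pulled base image (gcr.io/google-containers/alpine-with-bash:1.0) is linux/amd64 while the guest is linux/arm64/v8, and the RUN step then dies with "exec format error". A short sketch of how one might confirm the platform mismatch against the same profile (hypothetical commands, assuming the image-095000 VM is still up; not part of this run):

    minikube -p image-095000 ssh -- docker version --format '{{.Server.Os}}/{{.Server.Arch}}'
    minikube -p image-095000 ssh -- ls /proc/sys/fs/binfmt_misc/
    # absence of a qemu-x86_64 entry (or of the mount itself) means amd64 binaries cannot run unemulated in the guest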
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-095000 -n image-095000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-095000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------|--------------|---------|---------|---------------------|---------------------|
	| Command |                   Args                   |   Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------|--------------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-881000 -- get pods -o              | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'      |              |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o              | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'      |              |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o              | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'      |              |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o              | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'      |              |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o              | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'      |              |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o              | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}'     |              |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  --                 | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | nslookup kubernetes.io                   |              |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  --                 | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | nslookup kubernetes.default              |              |         |         |                     |                     |
	| kubectl | -p ha-881000 -- exec  -- nslookup        | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | kubernetes.default.svc.cluster.local     |              |         |         |                     |                     |
	| kubectl | -p ha-881000 -- get pods -o              | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}'     |              |         |         |                     |                     |
	| node    | add -p ha-881000 -v=7                    | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                        |              |         |         |                     |                     |
	| node    | ha-881000 node stop m02 -v=7             | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                        |              |         |         |                     |                     |
	| node    | ha-881000 node start m02 -v=7            | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:40 PDT |                     |
	|         | --alsologtostderr                        |              |         |         |                     |                     |
	| node    | list -p ha-881000 -v=7                   | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:41 PDT |                     |
	|         | --alsologtostderr                        |              |         |         |                     |                     |
	| stop    | -p ha-881000 -v=7                        | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:41 PDT | 08 Jul 24 12:42 PDT |
	|         | --alsologtostderr                        |              |         |         |                     |                     |
	| start   | -p ha-881000 --wait=true -v=7            | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:42 PDT | 08 Jul 24 12:43 PDT |
	|         | --alsologtostderr                        |              |         |         |                     |                     |
	| node    | list -p ha-881000                        | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT |                     |
	| node    | ha-881000 node delete m03 -v=7           | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT |                     |
	|         | --alsologtostderr                        |              |         |         |                     |                     |
	| stop    | ha-881000 stop -v=7                      | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT | 08 Jul 24 12:43 PDT |
	|         | --alsologtostderr                        |              |         |         |                     |                     |
	| start   | -p ha-881000 --wait=true                 | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:43 PDT | 08 Jul 24 12:45 PDT |
	|         | -v=7 --alsologtostderr                   |              |         |         |                     |                     |
	|         | --driver=qemu2                           |              |         |         |                     |                     |
	| node    | add -p ha-881000                         | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:45 PDT |                     |
	|         | --control-plane -v=7                     |              |         |         |                     |                     |
	|         | --alsologtostderr                        |              |         |         |                     |                     |
	| delete  | -p ha-881000                             | ha-881000    | jenkins | v1.33.1 | 08 Jul 24 12:51 PDT | 08 Jul 24 12:51 PDT |
	| start   | -p image-095000 --driver=qemu2           | image-095000 | jenkins | v1.33.1 | 08 Jul 24 12:51 PDT | 08 Jul 24 12:52 PDT |
	|         |                                          |              |         |         |                     |                     |
	| image   | build -t aaa:latest                      | image-095000 | jenkins | v1.33.1 | 08 Jul 24 12:52 PDT | 08 Jul 24 12:52 PDT |
	|         | ./testdata/image-build/test-normal       |              |         |         |                     |                     |
	|         | -p image-095000                          |              |         |         |                     |                     |
	| image   | build -t aaa:latest                      | image-095000 | jenkins | v1.33.1 | 08 Jul 24 12:52 PDT | 08 Jul 24 12:52 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str |              |         |         |                     |                     |
	|         | --build-opt=no-cache                     |              |         |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p       |              |         |         |                     |                     |
	|         | image-095000                             |              |         |         |                     |                     |
	|---------|------------------------------------------|--------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 12:51:54
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 12:51:54.663708    3076 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:51:54.663850    3076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:51:54.663857    3076 out.go:304] Setting ErrFile to fd 2...
	I0708 12:51:54.663859    3076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:51:54.663988    3076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:51:54.665061    3076 out.go:298] Setting JSON to false
	I0708 12:51:54.682523    3076 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3082,"bootTime":1720465232,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:51:54.682593    3076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:51:54.688121    3076 out.go:177] * [image-095000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:51:54.696038    3076 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:51:54.696087    3076 notify.go:220] Checking for updates...
	I0708 12:51:54.703978    3076 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:51:54.707062    3076 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:51:54.710008    3076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:51:54.713024    3076 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:51:54.716039    3076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:51:54.717556    3076 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:51:54.722095    3076 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 12:51:54.728899    3076 start.go:297] selected driver: qemu2
	I0708 12:51:54.728901    3076 start.go:901] validating driver "qemu2" against <nil>
	I0708 12:51:54.728906    3076 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:51:54.728958    3076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 12:51:54.731970    3076 out.go:177] * Automatically selected the socket_vmnet network
	I0708 12:51:54.742619    3076 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0708 12:51:54.742712    3076 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0708 12:51:54.742748    3076 cni.go:84] Creating CNI manager for ""
	I0708 12:51:54.742755    3076 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 12:51:54.742759    3076 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 12:51:54.742815    3076 start.go:340] cluster config:
	{Name:image-095000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:image-095000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:51:54.746819    3076 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:51:54.754038    3076 out.go:177] * Starting "image-095000" primary control-plane node in "image-095000" cluster
	I0708 12:51:54.758011    3076 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:51:54.758022    3076 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:51:54.758028    3076 cache.go:56] Caching tarball of preloaded images
	I0708 12:51:54.758080    3076 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:51:54.758083    3076 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:51:54.758271    3076 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/config.json ...
	I0708 12:51:54.758280    3076 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/config.json: {Name:mk0dedaef21acbdc8863c0c10a14b2056d4349e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:51:54.758631    3076 start.go:360] acquireMachinesLock for image-095000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:51:54.758664    3076 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "image-095000"
	I0708 12:51:54.758673    3076 start.go:93] Provisioning new machine with config: &{Name:image-095000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:image-095000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:51:54.758703    3076 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 12:51:54.767058    3076 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0708 12:51:54.790980    3076 start.go:159] libmachine.API.Create for "image-095000" (driver="qemu2")
	I0708 12:51:54.791005    3076 client.go:168] LocalClient.Create starting
	I0708 12:51:54.791075    3076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 12:51:54.791104    3076 main.go:141] libmachine: Decoding PEM data...
	I0708 12:51:54.791111    3076 main.go:141] libmachine: Parsing certificate...
	I0708 12:51:54.791149    3076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 12:51:54.791170    3076 main.go:141] libmachine: Decoding PEM data...
	I0708 12:51:54.791179    3076 main.go:141] libmachine: Parsing certificate...
	I0708 12:51:54.791618    3076 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 12:51:54.968824    3076 main.go:141] libmachine: Creating SSH key...
	I0708 12:51:55.128928    3076 main.go:141] libmachine: Creating Disk image...
	I0708 12:51:55.128933    3076 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 12:51:55.129136    3076 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/disk.qcow2
	I0708 12:51:55.138965    3076 main.go:141] libmachine: STDOUT: 
	I0708 12:51:55.138981    3076 main.go:141] libmachine: STDERR: 
	I0708 12:51:55.139023    3076 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/disk.qcow2 +20000M
	I0708 12:51:55.146973    3076 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 12:51:55.146985    3076 main.go:141] libmachine: STDERR: 
	I0708 12:51:55.146994    3076 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/disk.qcow2
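	The disk-creation step above boils down to two qemu-img invocations: convert the raw boot image to qcow2, then grow it by the requested size. For reference, a minimal standalone sketch of shelling out to qemu-img the same way is shown below; the local file names and the 20000 MB size are illustrative assumptions taken from this log, and this is not minikube's own libmachine code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk converts a raw boot disk to qcow2 and then grows it by extraMB,
	// mirroring the "qemu-img convert" and "qemu-img resize" calls in the log.
	func createDisk(rawPath, qcowPath string, extraMB int) error {
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2",
			rawPath, qcowPath).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img convert: %v: %s", err, out)
		}
		if out, err := exec.Command("qemu-img", "resize", qcowPath,
			fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Hypothetical local paths; the log uses the profile's machines directory.
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
		}
	}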
	I0708 12:51:55.146997    3076 main.go:141] libmachine: Starting QEMU VM...
	I0708 12:51:55.147026    3076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:3e:c6:c7:6b:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/disk.qcow2
	I0708 12:51:55.183985    3076 main.go:141] libmachine: STDOUT: 
	I0708 12:51:55.184007    3076 main.go:141] libmachine: STDERR: 
	I0708 12:51:55.184010    3076 main.go:141] libmachine: Attempt 0
	I0708 12:51:55.184021    3076 main.go:141] libmachine: Searching for 9a:3e:c6:c7:6b:b2 in /var/db/dhcpd_leases ...
	I0708 12:51:55.184101    3076 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0708 12:51:55.184120    3076 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d94c1}
	I0708 12:51:55.184126    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:aa:c0:8d:92:26:5a ID:1,aa:c0:8d:92:26:5a Lease:0x668d9369}
	I0708 12:51:55.184130    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:51:55.184134    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:51:55.184138    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:51:55.184142    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:51:57.186291    3076 main.go:141] libmachine: Attempt 1
	I0708 12:51:57.186342    3076 main.go:141] libmachine: Searching for 9a:3e:c6:c7:6b:b2 in /var/db/dhcpd_leases ...
	I0708 12:51:57.186728    3076 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0708 12:51:57.186772    3076 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d94c1}
	I0708 12:51:57.186796    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:aa:c0:8d:92:26:5a ID:1,aa:c0:8d:92:26:5a Lease:0x668d9369}
	I0708 12:51:57.186817    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:51:57.186839    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:51:57.186859    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:51:57.186879    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:51:59.189036    3076 main.go:141] libmachine: Attempt 2
	I0708 12:51:59.189083    3076 main.go:141] libmachine: Searching for 9a:3e:c6:c7:6b:b2 in /var/db/dhcpd_leases ...
	I0708 12:51:59.189634    3076 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0708 12:51:59.189716    3076 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d94c1}
	I0708 12:51:59.189763    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:aa:c0:8d:92:26:5a ID:1,aa:c0:8d:92:26:5a Lease:0x668d9369}
	I0708 12:51:59.189788    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:51:59.189812    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:51:59.189836    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:51:59.189860    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:52:01.192045    3076 main.go:141] libmachine: Attempt 3
	I0708 12:52:01.192072    3076 main.go:141] libmachine: Searching for 9a:3e:c6:c7:6b:b2 in /var/db/dhcpd_leases ...
	I0708 12:52:01.192153    3076 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0708 12:52:01.192164    3076 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d94c1}
	I0708 12:52:01.192171    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:aa:c0:8d:92:26:5a ID:1,aa:c0:8d:92:26:5a Lease:0x668d9369}
	I0708 12:52:01.192174    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:52:01.192184    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:52:01.192188    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:52:01.192192    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:52:03.194198    3076 main.go:141] libmachine: Attempt 4
	I0708 12:52:03.194212    3076 main.go:141] libmachine: Searching for 9a:3e:c6:c7:6b:b2 in /var/db/dhcpd_leases ...
	I0708 12:52:03.194255    3076 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0708 12:52:03.194262    3076 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d94c1}
	I0708 12:52:03.194266    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:aa:c0:8d:92:26:5a ID:1,aa:c0:8d:92:26:5a Lease:0x668d9369}
	I0708 12:52:03.194270    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:52:03.194273    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:52:03.194277    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:52:03.194280    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:52:05.196259    3076 main.go:141] libmachine: Attempt 5
	I0708 12:52:05.196263    3076 main.go:141] libmachine: Searching for 9a:3e:c6:c7:6b:b2 in /var/db/dhcpd_leases ...
	I0708 12:52:05.196305    3076 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0708 12:52:05.196310    3076 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d94c1}
	I0708 12:52:05.196322    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:aa:c0:8d:92:26:5a ID:1,aa:c0:8d:92:26:5a Lease:0x668d9369}
	I0708 12:52:05.196332    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:52:05.196337    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:52:05.196341    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:52:05.196344    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:52:07.198368    3076 main.go:141] libmachine: Attempt 6
	I0708 12:52:07.198379    3076 main.go:141] libmachine: Searching for 9a:3e:c6:c7:6b:b2 in /var/db/dhcpd_leases ...
	I0708 12:52:07.198464    3076 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0708 12:52:07.198484    3076 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x668d94c1}
	I0708 12:52:07.198489    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:aa:c0:8d:92:26:5a ID:1,aa:c0:8d:92:26:5a Lease:0x668d9369}
	I0708 12:52:07.198493    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:75:66:b4:8a:80 ID:1,de:75:66:b4:8a:80 Lease:0x668d92ff}
	I0708 12:52:07.198496    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2a:1:f5:fb:91:b7 ID:1,2a:1:f5:fb:91:b7 Lease:0x668d90ef}
	I0708 12:52:07.198500    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:82:c3:fb:64:cc:2e ID:1,82:c3:fb:64:cc:2e Lease:0x668c3f2e}
	I0708 12:52:07.198504    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:6f:8d:44:21:17 ID:1,f2:6f:8d:44:21:17 Lease:0x668c3efb}
	I0708 12:52:09.200667    3076 main.go:141] libmachine: Attempt 7
	I0708 12:52:09.200712    3076 main.go:141] libmachine: Searching for 9a:3e:c6:c7:6b:b2 in /var/db/dhcpd_leases ...
	I0708 12:52:09.201154    3076 main.go:141] libmachine: Found 7 entries in /var/db/dhcpd_leases!
	I0708 12:52:09.201200    3076 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:9a:3e:c6:c7:6b:b2 ID:1,9a:3e:c6:c7:6b:b2 Lease:0x668d94e7}
	I0708 12:52:09.201210    3076 main.go:141] libmachine: Found match: 9a:3e:c6:c7:6b:b2
	I0708 12:52:09.201248    3076 main.go:141] libmachine: IP: 192.168.105.7
	I0708 12:52:09.201262    3076 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.7)...
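	The repeated "Searching for 9a:3e:c6:c7:6b:b2 in /var/db/dhcpd_leases" attempts above poll the macOS vmnet DHCP lease file until an entry for the new VM's MAC address appears; the matching entry's address then becomes the machine IP. A minimal sketch of such a lookup follows; the ip_address=/hw_address= field names reflect the macOS lease-file format, and the function is illustrative rather than minikube's actual implementation.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP scans a dhcpd_leases file for an entry whose hw_address line
	// contains the given MAC and returns that entry's ip_address value.
	func findLeaseIP(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{":
				ip = "" // start of a new lease entry
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
				return ip, nil
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("no lease entry found for %s", mac)
	}

	func main() {
		ip, err := findLeaseIP("/var/db/dhcpd_leases", "9a:3e:c6:c7:6b:b2")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("found IP:", ip)
	}

	Polling is unavoidable here: the lease only appears after the guest has booted far enough to request an address, which is why the log shows several empty attempts before the match on attempt 7.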
	I0708 12:52:12.224954    3076 machine.go:94] provisionDockerMachine start ...
	I0708 12:52:12.225201    3076 main.go:141] libmachine: Using SSH client type: native
	I0708 12:52:12.225770    3076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104eea920] 0x104eed180 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0708 12:52:12.225798    3076 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 12:52:12.296702    3076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 12:52:12.296724    3076 buildroot.go:166] provisioning hostname "image-095000"
	I0708 12:52:12.296820    3076 main.go:141] libmachine: Using SSH client type: native
	I0708 12:52:12.297059    3076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104eea920] 0x104eed180 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0708 12:52:12.297064    3076 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-095000 && echo "image-095000" | sudo tee /etc/hostname
	I0708 12:52:12.358876    3076 main.go:141] libmachine: SSH cmd err, output: <nil>: image-095000
	
	I0708 12:52:12.358945    3076 main.go:141] libmachine: Using SSH client type: native
	I0708 12:52:12.359099    3076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104eea920] 0x104eed180 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0708 12:52:12.359107    3076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-095000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-095000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-095000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 12:52:12.410588    3076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 12:52:12.410597    3076 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19195-1270/.minikube CaCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19195-1270/.minikube}
	I0708 12:52:12.410608    3076 buildroot.go:174] setting up certificates
	I0708 12:52:12.410614    3076 provision.go:84] configureAuth start
	I0708 12:52:12.410617    3076 provision.go:143] copyHostCerts
	I0708 12:52:12.410694    3076 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem, removing ...
	I0708 12:52:12.410698    3076 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 12:52:12.410829    3076 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem (1675 bytes)
	I0708 12:52:12.411034    3076 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem, removing ...
	I0708 12:52:12.411036    3076 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 12:52:12.411103    3076 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem (1078 bytes)
	I0708 12:52:12.411224    3076 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem, removing ...
	I0708 12:52:12.411226    3076 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 12:52:12.411288    3076 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem (1123 bytes)
	I0708 12:52:12.411388    3076 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem org=jenkins.image-095000 san=[127.0.0.1 192.168.105.7 image-095000 localhost minikube]
	I0708 12:52:12.605762    3076 provision.go:177] copyRemoteCerts
	I0708 12:52:12.605808    3076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 12:52:12.605817    3076 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/id_rsa Username:docker}
	I0708 12:52:12.633536    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 12:52:12.641815    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0708 12:52:12.649945    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 12:52:12.658936    3076 provision.go:87] duration metric: took 248.322333ms to configureAuth
	I0708 12:52:12.658944    3076 buildroot.go:189] setting minikube options for container-runtime
	I0708 12:52:12.659065    3076 config.go:182] Loaded profile config "image-095000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:52:12.659107    3076 main.go:141] libmachine: Using SSH client type: native
	I0708 12:52:12.659202    3076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104eea920] 0x104eed180 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0708 12:52:12.659205    3076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0708 12:52:12.706049    3076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0708 12:52:12.706053    3076 buildroot.go:70] root file system type: tmpfs
	I0708 12:52:12.706100    3076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0708 12:52:12.706139    3076 main.go:141] libmachine: Using SSH client type: native
	I0708 12:52:12.706242    3076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104eea920] 0x104eed180 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0708 12:52:12.706273    3076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0708 12:52:12.757088    3076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0708 12:52:12.757126    3076 main.go:141] libmachine: Using SSH client type: native
	I0708 12:52:12.757234    3076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104eea920] 0x104eed180 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0708 12:52:12.757240    3076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0708 12:52:14.112572    3076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
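	The command above only swaps in the freshly rendered docker.service unit when it differs from what is already on disk (the diff fails here because no unit exists yet), then reloads systemd and enables and restarts Docker. A rough local-filesystem sketch of that write-if-changed-then-restart pattern is shown below, assuming plain exec rather than minikube's SSH runner; paths and unit content are placeholders.

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// installUnit writes the unit only when its content changed, then reloads
	// systemd and enables/restarts the service, echoing the log's sequence.
	func installUnit(path string, content []byte) error {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return nil // unit unchanged; skip the restart
		}
		if err := os.WriteFile(path+".new", content, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		if err := installUnit("/lib/systemd/system/docker.service", unit); err != nil {
			fmt.Println(err)
		}
	}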
	
	I0708 12:52:14.112581    3076 machine.go:97] duration metric: took 1.887667084s to provisionDockerMachine
	I0708 12:52:14.112586    3076 client.go:171] duration metric: took 19.322131s to LocalClient.Create
	I0708 12:52:14.112601    3076 start.go:167] duration metric: took 19.322177208s to libmachine.API.Create "image-095000"
	I0708 12:52:14.112607    3076 start.go:293] postStartSetup for "image-095000" (driver="qemu2")
	I0708 12:52:14.112612    3076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 12:52:14.112684    3076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 12:52:14.112691    3076 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/id_rsa Username:docker}
	I0708 12:52:14.139211    3076 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 12:52:14.141213    3076 info.go:137] Remote host: Buildroot 2023.02.9
	I0708 12:52:14.141222    3076 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/addons for local assets ...
	I0708 12:52:14.141331    3076 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/files for local assets ...
	I0708 12:52:14.141445    3076 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> 17672.pem in /etc/ssl/certs
	I0708 12:52:14.141570    3076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 12:52:14.145286    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /etc/ssl/certs/17672.pem (1708 bytes)
	I0708 12:52:14.153563    3076 start.go:296] duration metric: took 40.95375ms for postStartSetup
	I0708 12:52:14.153993    3076 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/config.json ...
	I0708 12:52:14.154171    3076 start.go:128] duration metric: took 19.396019s to createHost
	I0708 12:52:14.154195    3076 main.go:141] libmachine: Using SSH client type: native
	I0708 12:52:14.154284    3076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104eea920] 0x104eed180 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0708 12:52:14.154286    3076 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 12:52:14.199136    3076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720468334.151513420
	
	I0708 12:52:14.199141    3076 fix.go:216] guest clock: 1720468334.151513420
	I0708 12:52:14.199144    3076 fix.go:229] Guest: 2024-07-08 12:52:14.15151342 -0700 PDT Remote: 2024-07-08 12:52:14.154172 -0700 PDT m=+19.510590709 (delta=-2.65858ms)
	I0708 12:52:14.199158    3076 fix.go:200] guest clock delta is within tolerance: -2.65858ms
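	The clock check above runs a date command on the guest (the %!s(MISSING).%!N(MISSING) text appears to be the logger mangling a "date +%s.%N" format string), parses the seconds.nanoseconds output, and compares it with the host clock. A small sketch of that comparison follows, with an assumed one-second tolerance; it is illustrative, not minikube's fix.go code.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses "<seconds>.<nanoseconds>" output from the guest and
	// returns guest time minus host time.
	func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, fmt.Errorf("parsing seconds: %w", err)
		}
		var nsec int64
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // pad/truncate fraction to nanoseconds
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return 0, fmt.Errorf("parsing nanoseconds: %w", err)
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		delta, err := guestClockDelta("1720468334.151513420", time.Now())
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("guest clock delta: %v\n", delta)
		if delta < -time.Second || delta > time.Second {
			fmt.Println("delta outside tolerance; host/guest clock sync would be needed")
		}
	}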
	I0708 12:52:14.199160    3076 start.go:83] releasing machines lock for "image-095000", held for 19.441048292s
	I0708 12:52:14.199439    3076 ssh_runner.go:195] Run: cat /version.json
	I0708 12:52:14.199440    3076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 12:52:14.199445    3076 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/id_rsa Username:docker}
	I0708 12:52:14.199457    3076 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/id_rsa Username:docker}
	I0708 12:52:14.223485    3076 ssh_runner.go:195] Run: systemctl --version
	I0708 12:52:14.263967    3076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 12:52:14.265984    3076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 12:52:14.266008    3076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0708 12:52:14.272711    3076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 12:52:14.272724    3076 start.go:494] detecting cgroup driver to use...
	I0708 12:52:14.272806    3076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:52:14.279448    3076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0708 12:52:14.282935    3076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0708 12:52:14.286454    3076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0708 12:52:14.286475    3076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0708 12:52:14.290237    3076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:52:14.294110    3076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0708 12:52:14.298008    3076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 12:52:14.301875    3076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 12:52:14.305894    3076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0708 12:52:14.309846    3076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0708 12:52:14.313829    3076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0708 12:52:14.317710    3076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 12:52:14.321471    3076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 12:52:14.325135    3076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:52:14.387538    3076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0708 12:52:14.395630    3076 start.go:494] detecting cgroup driver to use...
	I0708 12:52:14.395700    3076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0708 12:52:14.401931    3076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:52:14.409832    3076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 12:52:14.416680    3076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 12:52:14.422439    3076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:52:14.428129    3076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0708 12:52:14.470013    3076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 12:52:14.476966    3076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 12:52:14.483646    3076 ssh_runner.go:195] Run: which cri-dockerd
	I0708 12:52:14.485026    3076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0708 12:52:14.488678    3076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0708 12:52:14.494624    3076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0708 12:52:14.586440    3076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0708 12:52:14.656423    3076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0708 12:52:14.656480    3076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0708 12:52:14.662528    3076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:52:14.745319    3076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 12:52:16.929398    3076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.184125292s)
	I0708 12:52:16.929473    3076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0708 12:52:16.936520    3076 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0708 12:52:16.943580    3076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 12:52:16.949221    3076 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0708 12:52:17.039254    3076 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0708 12:52:17.107415    3076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:52:17.174105    3076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0708 12:52:17.181187    3076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 12:52:17.187347    3076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:52:17.280421    3076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0708 12:52:17.306359    3076 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0708 12:52:17.306433    3076 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0708 12:52:17.308701    3076 start.go:562] Will wait 60s for crictl version
	I0708 12:52:17.308742    3076 ssh_runner.go:195] Run: which crictl
	I0708 12:52:17.310270    3076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 12:52:17.329450    3076 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0708 12:52:17.329513    3076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 12:52:17.340217    3076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 12:52:17.351526    3076 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0708 12:52:17.351655    3076 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0708 12:52:17.353315    3076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 12:52:17.357931    3076 kubeadm.go:877] updating cluster {Name:image-095000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:image-095000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0708 12:52:17.357971    3076 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:52:17.358014    3076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 12:52:17.363727    3076 docker.go:685] Got preloaded images: 
	I0708 12:52:17.363733    3076 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0708 12:52:17.363771    3076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0708 12:52:17.367077    3076 ssh_runner.go:195] Run: which lz4
	I0708 12:52:17.368480    3076 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0708 12:52:17.369756    3076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 12:52:17.369763    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (335401736 bytes)
	I0708 12:52:18.653374    3076 docker.go:649] duration metric: took 1.284960958s to copy over tarball
	I0708 12:52:18.653423    3076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 12:52:19.685256    3076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.031849167s)
	I0708 12:52:19.685268    3076 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 12:52:19.701019    3076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0708 12:52:19.705055    3076 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0708 12:52:19.711254    3076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:52:19.782667    3076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 12:52:22.025094    3076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.242478083s)
	I0708 12:52:22.025171    3076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 12:52:22.032814    3076 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0708 12:52:22.032822    3076 cache_images.go:84] Images are preloaded, skipping loading
	I0708 12:52:22.032826    3076 kubeadm.go:928] updating node { 192.168.105.7 8443 v1.30.2 docker true true} ...
	I0708 12:52:22.032886    3076 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=image-095000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:image-095000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 12:52:22.032945    3076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0708 12:52:22.040724    3076 cni.go:84] Creating CNI manager for ""
	I0708 12:52:22.040733    3076 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 12:52:22.040737    3076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 12:52:22.040745    3076 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.7 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-095000 NodeName:image-095000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 12:52:22.040810    3076 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-095000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 12:52:22.040871    3076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0708 12:52:22.044828    3076 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 12:52:22.044853    3076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 12:52:22.048409    3076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0708 12:52:22.054076    3076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 12:52:22.059903    3076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0708 12:52:22.065722    3076 ssh_runner.go:195] Run: grep 192.168.105.7	control-plane.minikube.internal$ /etc/hosts
	I0708 12:52:22.067140    3076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 12:52:22.070985    3076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:52:22.142538    3076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 12:52:22.151447    3076 certs.go:68] Setting up /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000 for IP: 192.168.105.7
	I0708 12:52:22.151452    3076 certs.go:194] generating shared ca certs ...
	I0708 12:52:22.151460    3076 certs.go:226] acquiring lock for ca certs: {Name:mka13b605a6983b2618b91f3a0bdec43c132a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:52:22.151639    3076 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key
	I0708 12:52:22.151686    3076 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key
	I0708 12:52:22.151690    3076 certs.go:256] generating profile certs ...
	I0708 12:52:22.151728    3076 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/client.key
	I0708 12:52:22.151735    3076 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/client.crt with IP's: []
	I0708 12:52:22.388693    3076 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/client.crt ...
	I0708 12:52:22.388706    3076 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/client.crt: {Name:mk2f46a82808ea6c241462d22d0e5a6ef8f31fa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:52:22.389105    3076 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/client.key ...
	I0708 12:52:22.389107    3076 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/client.key: {Name:mk9da9233fc6564f1f8991c39e62150154be0b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:52:22.389262    3076 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/apiserver.key.dc4ba3d8
	I0708 12:52:22.389270    3076 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/apiserver.crt.dc4ba3d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.7]
	I0708 12:52:22.614429    3076 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/apiserver.crt.dc4ba3d8 ...
	I0708 12:52:22.614437    3076 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/apiserver.crt.dc4ba3d8: {Name:mkcef81447e1ca30b773feee36ac2381dc13d678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:52:22.614702    3076 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/apiserver.key.dc4ba3d8 ...
	I0708 12:52:22.614706    3076 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/apiserver.key.dc4ba3d8: {Name:mk321f3b71a17757f74b0932d7666c8c50a496d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:52:22.614842    3076 certs.go:381] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/apiserver.crt.dc4ba3d8 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/apiserver.crt
	I0708 12:52:22.614957    3076 certs.go:385] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/apiserver.key.dc4ba3d8 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/apiserver.key
	I0708 12:52:22.615056    3076 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/proxy-client.key
	I0708 12:52:22.615062    3076 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/proxy-client.crt with IP's: []
	I0708 12:52:22.725957    3076 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/proxy-client.crt ...
	I0708 12:52:22.725959    3076 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/proxy-client.crt: {Name:mkd9b5e81780ecea8f8f8bc557bc3cffe7bcbe51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:52:22.726118    3076 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/proxy-client.key ...
	I0708 12:52:22.726120    3076 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/proxy-client.key: {Name:mkae87f04aa4643bdf2b05dda4731526c02257ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:52:22.726400    3076 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem (1338 bytes)
	W0708 12:52:22.726432    3076 certs.go:480] ignoring /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767_empty.pem, impossibly tiny 0 bytes
	I0708 12:52:22.726436    3076 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 12:52:22.726454    3076 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem (1078 bytes)
	I0708 12:52:22.726470    3076 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem (1123 bytes)
	I0708 12:52:22.726493    3076 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem (1675 bytes)
	I0708 12:52:22.726533    3076 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem (1708 bytes)
	I0708 12:52:22.726862    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 12:52:22.736339    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 12:52:22.744934    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 12:52:22.753486    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 12:52:22.761914    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0708 12:52:22.770238    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 12:52:22.778453    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 12:52:22.791231    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/image-095000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0708 12:52:22.801096    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /usr/share/ca-certificates/17672.pem (1708 bytes)
	I0708 12:52:22.810013    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 12:52:22.820641    3076 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem --> /usr/share/ca-certificates/1767.pem (1338 bytes)
	I0708 12:52:22.828928    3076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 12:52:22.834521    3076 ssh_runner.go:195] Run: openssl version
	I0708 12:52:22.836837    3076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1767.pem && ln -fs /usr/share/ca-certificates/1767.pem /etc/ssl/certs/1767.pem"
	I0708 12:52:22.840456    3076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1767.pem
	I0708 12:52:22.842033    3076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:34 /usr/share/ca-certificates/1767.pem
	I0708 12:52:22.842052    3076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1767.pem
	I0708 12:52:22.844224    3076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1767.pem /etc/ssl/certs/51391683.0"
	I0708 12:52:22.847871    3076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17672.pem && ln -fs /usr/share/ca-certificates/17672.pem /etc/ssl/certs/17672.pem"
	I0708 12:52:22.851355    3076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17672.pem
	I0708 12:52:22.852887    3076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:34 /usr/share/ca-certificates/17672.pem
	I0708 12:52:22.852904    3076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17672.pem
	I0708 12:52:22.854899    3076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17672.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 12:52:22.858423    3076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 12:52:22.862330    3076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:52:22.863775    3076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:52:22.863794    3076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 12:52:22.865670    3076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 12:52:22.869586    3076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 12:52:22.871109    3076 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0708 12:52:22.871142    3076 kubeadm.go:391] StartCluster: {Name:image-095000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:image-095000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:52:22.871213    3076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0708 12:52:22.876661    3076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 12:52:22.880421    3076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 12:52:22.883777    3076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 12:52:22.887037    3076 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 12:52:22.887040    3076 kubeadm.go:156] found existing configuration files:
	
	I0708 12:52:22.887059    3076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0708 12:52:22.890028    3076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 12:52:22.890046    3076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 12:52:22.893549    3076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0708 12:52:22.896930    3076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 12:52:22.896954    3076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 12:52:22.900483    3076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0708 12:52:22.903673    3076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 12:52:22.903701    3076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 12:52:22.906835    3076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0708 12:52:22.910037    3076 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 12:52:22.910053    3076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 12:52:22.913435    3076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 12:52:22.936092    3076 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0708 12:52:22.936126    3076 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 12:52:22.981644    3076 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 12:52:22.981698    3076 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 12:52:22.981742    3076 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 12:52:23.060883    3076 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 12:52:23.068666    3076 out.go:204]   - Generating certificates and keys ...
	I0708 12:52:23.068699    3076 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 12:52:23.068737    3076 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 12:52:23.249668    3076 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0708 12:52:23.350235    3076 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0708 12:52:23.587752    3076 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0708 12:52:23.684780    3076 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0708 12:52:23.773244    3076 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0708 12:52:23.773311    3076 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [image-095000 localhost] and IPs [192.168.105.7 127.0.0.1 ::1]
	I0708 12:52:23.892342    3076 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0708 12:52:23.892407    3076 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [image-095000 localhost] and IPs [192.168.105.7 127.0.0.1 ::1]
	I0708 12:52:24.037980    3076 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0708 12:52:24.144501    3076 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0708 12:52:24.201219    3076 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0708 12:52:24.201244    3076 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 12:52:24.312833    3076 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 12:52:24.544123    3076 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0708 12:52:24.631279    3076 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 12:52:24.868453    3076 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 12:52:24.941933    3076 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 12:52:24.942162    3076 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 12:52:24.943281    3076 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 12:52:24.948574    3076 out.go:204]   - Booting up control plane ...
	I0708 12:52:24.948620    3076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 12:52:24.948652    3076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 12:52:24.948686    3076 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 12:52:24.951656    3076 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 12:52:24.951957    3076 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 12:52:24.951978    3076 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 12:52:25.030441    3076 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0708 12:52:25.030480    3076 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0708 12:52:25.533800    3076 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.835917ms
	I0708 12:52:25.533971    3076 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0708 12:52:29.033789    3076 kubeadm.go:309] [api-check] The API server is healthy after 3.50060971s
	I0708 12:52:29.039215    3076 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 12:52:29.042560    3076 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 12:52:29.049290    3076 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 12:52:29.049383    3076 kubeadm.go:309] [mark-control-plane] Marking the node image-095000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 12:52:29.052172    3076 kubeadm.go:309] [bootstrap-token] Using token: tkurog.xmq4b0olo5w4bjnh
	I0708 12:52:29.064768    3076 out.go:204]   - Configuring RBAC rules ...
	I0708 12:52:29.064819    3076 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 12:52:29.064865    3076 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 12:52:29.066395    3076 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 12:52:29.067571    3076 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 12:52:29.068426    3076 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 12:52:29.069270    3076 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 12:52:29.437804    3076 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 12:52:29.844341    3076 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 12:52:30.437256    3076 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 12:52:30.437263    3076 kubeadm.go:309] 
	I0708 12:52:30.437288    3076 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 12:52:30.437290    3076 kubeadm.go:309] 
	I0708 12:52:30.437326    3076 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 12:52:30.437327    3076 kubeadm.go:309] 
	I0708 12:52:30.437340    3076 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 12:52:30.437396    3076 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 12:52:30.437421    3076 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 12:52:30.437423    3076 kubeadm.go:309] 
	I0708 12:52:30.437448    3076 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 12:52:30.437450    3076 kubeadm.go:309] 
	I0708 12:52:30.437487    3076 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 12:52:30.437490    3076 kubeadm.go:309] 
	I0708 12:52:30.437515    3076 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 12:52:30.437556    3076 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 12:52:30.437586    3076 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 12:52:30.437588    3076 kubeadm.go:309] 
	I0708 12:52:30.437630    3076 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 12:52:30.437681    3076 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 12:52:30.437683    3076 kubeadm.go:309] 
	I0708 12:52:30.437730    3076 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token tkurog.xmq4b0olo5w4bjnh \
	I0708 12:52:30.437779    3076 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e \
	I0708 12:52:30.437791    3076 kubeadm.go:309] 	--control-plane 
	I0708 12:52:30.437794    3076 kubeadm.go:309] 
	I0708 12:52:30.437832    3076 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 12:52:30.437838    3076 kubeadm.go:309] 
	I0708 12:52:30.437877    3076 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token tkurog.xmq4b0olo5w4bjnh \
	I0708 12:52:30.437926    3076 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e 
	I0708 12:52:30.437982    3076 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 12:52:30.437989    3076 cni.go:84] Creating CNI manager for ""
	I0708 12:52:30.437997    3076 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 12:52:30.442496    3076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 12:52:30.450356    3076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 12:52:30.453995    3076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 12:52:30.459551    3076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 12:52:30.459601    3076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 12:52:30.459604    3076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes image-095000 minikube.k8s.io/updated_at=2024_07_08T12_52_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=image-095000 minikube.k8s.io/primary=true
	I0708 12:52:30.528154    3076 ops.go:34] apiserver oom_adj: -16
	I0708 12:52:30.528171    3076 kubeadm.go:1107] duration metric: took 68.607708ms to wait for elevateKubeSystemPrivileges
	W0708 12:52:30.528185    3076 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 12:52:30.528188    3076 kubeadm.go:393] duration metric: took 7.657266708s to StartCluster
	I0708 12:52:30.528196    3076 settings.go:142] acquiring lock: {Name:mka0c397a57d617e1d77508d22cc3adb2edf5927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:52:30.528287    3076 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:52:30.528609    3076 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:52:30.528820    3076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0708 12:52:30.528833    3076 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:52:30.528876    3076 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 12:52:30.528923    3076 addons.go:69] Setting storage-provisioner=true in profile "image-095000"
	I0708 12:52:30.528926    3076 addons.go:69] Setting default-storageclass=true in profile "image-095000"
	I0708 12:52:30.528933    3076 addons.go:234] Setting addon storage-provisioner=true in "image-095000"
	I0708 12:52:30.528941    3076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-095000"
	I0708 12:52:30.528943    3076 host.go:66] Checking if "image-095000" exists ...
	I0708 12:52:30.529068    3076 config.go:182] Loaded profile config "image-095000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:52:30.530216    3076 addons.go:234] Setting addon default-storageclass=true in "image-095000"
	I0708 12:52:30.530225    3076 host.go:66] Checking if "image-095000" exists ...
	I0708 12:52:30.530744    3076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 12:52:30.532618    3076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 12:52:30.532625    3076 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/id_rsa Username:docker}
	I0708 12:52:30.532453    3076 out.go:177] * Verifying Kubernetes components...
	I0708 12:52:30.540488    3076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 12:52:30.544492    3076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 12:52:30.547485    3076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 12:52:30.547488    3076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 12:52:30.547495    3076 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/image-095000/id_rsa Username:docker}
	I0708 12:52:30.574869    3076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0708 12:52:30.655968    3076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 12:52:30.667170    3076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 12:52:30.667921    3076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 12:52:30.819773    3076 start.go:946] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0708 12:52:30.820201    3076 api_server.go:52] waiting for apiserver process to appear ...
	I0708 12:52:30.820236    3076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 12:52:30.885007    3076 api_server.go:72] duration metric: took 356.171167ms to wait for apiserver process to appear ...
	I0708 12:52:30.885013    3076 api_server.go:88] waiting for apiserver healthz status ...
	I0708 12:52:30.885023    3076 api_server.go:253] Checking apiserver healthz at https://192.168.105.7:8443/healthz ...
	I0708 12:52:30.887597    3076 api_server.go:279] https://192.168.105.7:8443/healthz returned 200:
	ok
	I0708 12:52:30.888046    3076 api_server.go:141] control plane version: v1.30.2
	I0708 12:52:30.888051    3076 api_server.go:131] duration metric: took 3.03325ms to wait for apiserver health ...
	I0708 12:52:30.888055    3076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 12:52:30.890625    3076 system_pods.go:59] 5 kube-system pods found
	I0708 12:52:30.890632    3076 system_pods.go:61] "etcd-image-095000" [ba6ee340-7715-4b41-b7dd-d07ee673fb5f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0708 12:52:30.890635    3076 system_pods.go:61] "kube-apiserver-image-095000" [a234c8fd-1575-45c9-b563-f1d129f5f1ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0708 12:52:30.890639    3076 system_pods.go:61] "kube-controller-manager-image-095000" [c612a7ea-d864-4938-86b1-0e062841d257] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0708 12:52:30.890641    3076 system_pods.go:61] "kube-scheduler-image-095000" [c1b40a5c-60df-473b-aca4-3258a1f2a544] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0708 12:52:30.890642    3076 system_pods.go:61] "storage-provisioner" [fffff959-81af-4310-9fbe-427b9e1927f6] Pending
	I0708 12:52:30.890644    3076 system_pods.go:74] duration metric: took 2.587333ms to wait for pod list to return data ...
	I0708 12:52:30.890648    3076 kubeadm.go:576] duration metric: took 361.816792ms to wait for: map[apiserver:true system_pods:true]
	I0708 12:52:30.890654    3076 node_conditions.go:102] verifying NodePressure condition ...
	I0708 12:52:30.891884    3076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0708 12:52:30.891890    3076 node_conditions.go:123] node cpu capacity is 2
	I0708 12:52:30.891895    3076 node_conditions.go:105] duration metric: took 1.239709ms to run NodePressure ...
	I0708 12:52:30.891901    3076 start.go:240] waiting for startup goroutines ...
	I0708 12:52:30.892665    3076 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0708 12:52:30.901531    3076 addons.go:510] duration metric: took 372.690167ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0708 12:52:31.323339    3076 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-095000" context rescaled to 1 replicas
	I0708 12:52:31.323354    3076 start.go:245] waiting for cluster config update ...
	I0708 12:52:31.323359    3076 start.go:254] writing updated cluster config ...
	I0708 12:52:31.323617    3076 ssh_runner.go:195] Run: rm -f paused
	I0708 12:52:31.426608    3076 start.go:600] kubectl: 1.29.2, cluster: 1.30.2 (minor skew: 1)
	I0708 12:52:31.429919    3076 out.go:177] * Done! kubectl is now configured to use "image-095000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 08 19:52:26 image-095000 dockerd[1282]: time="2024-07-08T19:52:26.032052217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:52:26 image-095000 dockerd[1282]: time="2024-07-08T19:52:26.032128050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:52:26 image-095000 dockerd[1282]: time="2024-07-08T19:52:26.032155467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:52:26 image-095000 dockerd[1282]: time="2024-07-08T19:52:26.032222342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:52:26 image-095000 cri-dockerd[1175]: time="2024-07-08T19:52:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/354d696b9780b5c49202989e9f5ed49f7ef9599be368d9661451e7515f8de31b/resolv.conf as [nameserver 192.168.105.1]"
	Jul 08 19:52:26 image-095000 cri-dockerd[1175]: time="2024-07-08T19:52:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8ea9aa89a169b5fa1981a97019895c44083e5cd3b4d30319e34889329b96f826/resolv.conf as [nameserver 192.168.105.1]"
	Jul 08 19:52:26 image-095000 dockerd[1282]: time="2024-07-08T19:52:26.108314467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:52:26 image-095000 dockerd[1282]: time="2024-07-08T19:52:26.108745384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:52:26 image-095000 dockerd[1282]: time="2024-07-08T19:52:26.108764467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:52:26 image-095000 dockerd[1282]: time="2024-07-08T19:52:26.108812342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:52:26 image-095000 dockerd[1282]: time="2024-07-08T19:52:26.136303842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:52:26 image-095000 dockerd[1282]: time="2024-07-08T19:52:26.136425301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:52:26 image-095000 dockerd[1282]: time="2024-07-08T19:52:26.136436509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:52:26 image-095000 dockerd[1282]: time="2024-07-08T19:52:26.136480176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:52:32 image-095000 dockerd[1276]: time="2024-07-08T19:52:32.608781429Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 08 19:52:32 image-095000 dockerd[1276]: time="2024-07-08T19:52:32.726136095Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 08 19:52:32 image-095000 dockerd[1276]: time="2024-07-08T19:52:32.743912095Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 08 19:52:32 image-095000 dockerd[1282]: time="2024-07-08T19:52:32.773357054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 19:52:32 image-095000 dockerd[1282]: time="2024-07-08T19:52:32.773570470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 19:52:32 image-095000 dockerd[1282]: time="2024-07-08T19:52:32.773602637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:52:32 image-095000 dockerd[1282]: time="2024-07-08T19:52:32.773673179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 19:52:32 image-095000 dockerd[1276]: time="2024-07-08T19:52:32.845539095Z" level=info msg="ignoring event" container=ae7e14d84c70bc4ce92a12d9ed3dfb102c87ba73e912ccf728acefb304258e25 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 08 19:52:32 image-095000 dockerd[1282]: time="2024-07-08T19:52:32.845615054Z" level=info msg="shim disconnected" id=ae7e14d84c70bc4ce92a12d9ed3dfb102c87ba73e912ccf728acefb304258e25 namespace=moby
	Jul 08 19:52:32 image-095000 dockerd[1282]: time="2024-07-08T19:52:32.845664804Z" level=warning msg="cleaning up after shim disconnected" id=ae7e14d84c70bc4ce92a12d9ed3dfb102c87ba73e912ccf728acefb304258e25 namespace=moby
	Jul 08 19:52:32 image-095000 dockerd[1282]: time="2024-07-08T19:52:32.845669554Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dd7149420a04a       84c601f3f72c8       7 seconds ago       Running             kube-apiserver            0                   8ea9aa89a169b       kube-apiserver-image-095000
	7e8c119dcbe5b       014faa467e297       7 seconds ago       Running             etcd                      0                   354d696b9780b       etcd-image-095000
	53abb99cfca99       c7dd04b1bafeb       8 seconds ago       Running             kube-scheduler            0                   2af4fe0320043       kube-scheduler-image-095000
	960d789050f2e       e1dcc3400d3ea       8 seconds ago       Running             kube-controller-manager   0                   75980c102c99a       kube-controller-manager-image-095000
	
	
	==> describe nodes <==
	Name:               image-095000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-095000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=image-095000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T12_52_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 19:52:27 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-095000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 19:52:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 19:52:32 +0000   Mon, 08 Jul 2024 19:52:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 19:52:32 +0000   Mon, 08 Jul 2024 19:52:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 19:52:32 +0000   Mon, 08 Jul 2024 19:52:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 19:52:32 +0000   Mon, 08 Jul 2024 19:52:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.7
	  Hostname:    image-095000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904748Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904748Ki
	  pods:               110
	System Info:
	  Machine ID:                 386bd572a037450fb60dcb695de61ac5
	  System UUID:                386bd572a037450fb60dcb695de61ac5
	  Boot ID:                    802f0bab-a7ec-4edf-a615-41a5ff338afd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-095000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4s
	  kube-system                 kube-apiserver-image-095000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-controller-manager-image-095000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-image-095000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 8s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)  kubelet  Node image-095000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)  kubelet  Node image-095000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)  kubelet  Node image-095000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s               kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 4s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4s               kubelet  Node image-095000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s               kubelet  Node image-095000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s               kubelet  Node image-095000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                1s               kubelet  Node image-095000 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul 8 19:52] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.657323] EINJ: EINJ table not found.
	[  +0.556622] systemd-fstab-generator[117]: Ignoring "noauto" option for root device
	[  +0.108695] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000364] platform regulatory.0: Falling back to sysfs fallback for: regulatory.db
	[  +5.651341] systemd-fstab-generator[501]: Ignoring "noauto" option for root device
	[  +0.069208] systemd-fstab-generator[513]: Ignoring "noauto" option for root device
	[  +1.439691] systemd-fstab-generator[852]: Ignoring "noauto" option for root device
	[  +0.198879] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.068353] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +0.090453] systemd-fstab-generator[914]: Ignoring "noauto" option for root device
	[  +2.145866] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.149271] systemd-fstab-generator[1128]: Ignoring "noauto" option for root device
	[  +0.067554] systemd-fstab-generator[1140]: Ignoring "noauto" option for root device
	[  +0.068007] systemd-fstab-generator[1152]: Ignoring "noauto" option for root device
	[  +0.104440] systemd-fstab-generator[1167]: Ignoring "noauto" option for root device
	[  +2.503465] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	[  +2.203568] kauditd_printk_skb: 136 callbacks suppressed
	[  +0.153815] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +2.880208] systemd-fstab-generator[1689]: Ignoring "noauto" option for root device
	[  +4.528277] systemd-fstab-generator[2105]: Ignoring "noauto" option for root device
	[  +0.047153] kauditd_printk_skb: 122 callbacks suppressed
	[  +1.047153] systemd-fstab-generator[2175]: Ignoring "noauto" option for root device
	
	
	==> etcd [7e8c119dcbe5] <==
	{"level":"info","ts":"2024-07-08T19:52:26.235392Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.7:2380"}
	{"level":"info","ts":"2024-07-08T19:52:26.239928Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"16a42eb2b3219327","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-08T19:52:26.239974Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T19:52:26.239994Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T19:52:26.239998Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-08T19:52:26.240436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16a42eb2b3219327 switched to configuration voters=(1631480310059340583)"}
	{"level":"info","ts":"2024-07-08T19:52:26.24047Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bdbec8af0872bea2","local-member-id":"16a42eb2b3219327","added-peer-id":"16a42eb2b3219327","added-peer-peer-urls":["https://192.168.105.7:2380"]}
	{"level":"info","ts":"2024-07-08T19:52:27.105006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16a42eb2b3219327 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-08T19:52:27.105149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16a42eb2b3219327 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-08T19:52:27.105195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16a42eb2b3219327 received MsgPreVoteResp from 16a42eb2b3219327 at term 1"}
	{"level":"info","ts":"2024-07-08T19:52:27.105222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16a42eb2b3219327 became candidate at term 2"}
	{"level":"info","ts":"2024-07-08T19:52:27.105482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16a42eb2b3219327 received MsgVoteResp from 16a42eb2b3219327 at term 2"}
	{"level":"info","ts":"2024-07-08T19:52:27.105529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16a42eb2b3219327 became leader at term 2"}
	{"level":"info","ts":"2024-07-08T19:52:27.105566Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 16a42eb2b3219327 elected leader 16a42eb2b3219327 at term 2"}
	{"level":"info","ts":"2024-07-08T19:52:27.107466Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"16a42eb2b3219327","local-member-attributes":"{Name:image-095000 ClientURLs:[https://192.168.105.7:2379]}","request-path":"/0/members/16a42eb2b3219327/attributes","cluster-id":"bdbec8af0872bea2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T19:52:27.107577Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:52:27.107966Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T19:52:27.108144Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T19:52:27.108174Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T19:52:27.108237Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:52:27.108935Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bdbec8af0872bea2","local-member-id":"16a42eb2b3219327","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:52:27.10904Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:52:27.109068Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T19:52:27.111185Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.7:2379"}
	{"level":"info","ts":"2024-07-08T19:52:27.111218Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:52:33 up 0 min,  0 users,  load average: 1.27, 0.29, 0.10
	Linux image-095000 5.10.207 #1 SMP PREEMPT Wed Jul 3 15:00:24 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dd7149420a04] <==
	I0708 19:52:27.713544       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0708 19:52:27.713625       1 shared_informer.go:320] Caches are synced for configmaps
	I0708 19:52:27.713696       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0708 19:52:27.714071       1 controller.go:615] quota admission added evaluator for: namespaces
	I0708 19:52:27.714656       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0708 19:52:27.714687       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0708 19:52:27.729712       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0708 19:52:27.729776       1 aggregator.go:165] initial CRD sync complete...
	I0708 19:52:27.729784       1 autoregister_controller.go:141] Starting autoregister controller
	I0708 19:52:27.729788       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0708 19:52:27.729791       1 cache.go:39] Caches are synced for autoregister controller
	I0708 19:52:27.898212       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 19:52:28.616880       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0708 19:52:28.619672       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0708 19:52:28.619685       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0708 19:52:28.755701       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 19:52:28.765771       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0708 19:52:28.834625       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0708 19:52:28.836674       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.7]
	I0708 19:52:28.837037       1 controller.go:615] quota admission added evaluator for: endpoints
	I0708 19:52:28.838138       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 19:52:29.639863       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0708 19:52:29.792833       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0708 19:52:29.796249       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0708 19:52:29.799714       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [960d789050f2] <==
	E0708 19:52:31.988149       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0708 19:52:31.988164       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0708 19:52:32.139822       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0708 19:52:32.139861       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0708 19:52:32.139868       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0708 19:52:32.289420       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0708 19:52:32.289451       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0708 19:52:32.289457       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0708 19:52:32.338220       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0708 19:52:32.338249       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0708 19:52:32.338259       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0708 19:52:32.489546       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0708 19:52:32.489562       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0708 19:52:32.489572       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0708 19:52:32.489591       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0708 19:52:32.489596       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0708 19:52:32.639926       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0708 19:52:32.639966       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0708 19:52:32.639975       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0708 19:52:32.790437       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0708 19:52:32.790482       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0708 19:52:32.790489       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0708 19:52:32.939928       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0708 19:52:32.939988       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0708 19:52:32.939998       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	
	
	==> kube-scheduler [53abb99cfca9] <==
	W0708 19:52:27.671939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 19:52:27.671944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 19:52:27.671960       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 19:52:27.671964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 19:52:27.671989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 19:52:27.671997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0708 19:52:27.672002       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 19:52:27.672005       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 19:52:27.672020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 19:52:27.672024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0708 19:52:27.671801       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 19:52:27.672029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 19:52:27.672099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 19:52:27.672133       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 19:52:27.672150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 19:52:27.672155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 19:52:28.513382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 19:52:28.513407       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 19:52:28.599352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 19:52:28.599370       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0708 19:52:28.624143       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 19:52:28.624191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 19:52:28.632806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 19:52:28.632839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0708 19:52:29.070421       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.737253    2112 topology_manager.go:215] "Topology Admit Handler" podUID="a62d7ce28d8318dee57a07ad22174d45" podNamespace="kube-system" podName="kube-scheduler-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.737448    2112 topology_manager.go:215] "Topology Admit Handler" podUID="bcd446df052bf937ebe3715dded5146a" podNamespace="kube-system" podName="etcd-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.737483    2112 topology_manager.go:215] "Topology Admit Handler" podUID="0c8391fa433f337be78eeab138ebe4e8" podNamespace="kube-system" podName="kube-apiserver-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.737497    2112 topology_manager.go:215] "Topology Admit Handler" podUID="3fd313b31d1cdd2de01fcba851c698e1" podNamespace="kube-system" podName="kube-controller-manager-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: E0708 19:52:29.742089    2112 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-image-095000\" already exists" pod="kube-system/kube-scheduler-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: E0708 19:52:29.742305    2112 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-095000\" already exists" pod="kube-system/kube-apiserver-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.930455    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3fd313b31d1cdd2de01fcba851c698e1-k8s-certs\") pod \"kube-controller-manager-image-095000\" (UID: \"3fd313b31d1cdd2de01fcba851c698e1\") " pod="kube-system/kube-controller-manager-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.930475    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a62d7ce28d8318dee57a07ad22174d45-kubeconfig\") pod \"kube-scheduler-image-095000\" (UID: \"a62d7ce28d8318dee57a07ad22174d45\") " pod="kube-system/kube-scheduler-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.930486    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c8391fa433f337be78eeab138ebe4e8-ca-certs\") pod \"kube-apiserver-image-095000\" (UID: \"0c8391fa433f337be78eeab138ebe4e8\") " pod="kube-system/kube-apiserver-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.930494    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c8391fa433f337be78eeab138ebe4e8-usr-share-ca-certificates\") pod \"kube-apiserver-image-095000\" (UID: \"0c8391fa433f337be78eeab138ebe4e8\") " pod="kube-system/kube-apiserver-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.930503    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3fd313b31d1cdd2de01fcba851c698e1-ca-certs\") pod \"kube-controller-manager-image-095000\" (UID: \"3fd313b31d1cdd2de01fcba851c698e1\") " pod="kube-system/kube-controller-manager-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.930512    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3fd313b31d1cdd2de01fcba851c698e1-flexvolume-dir\") pod \"kube-controller-manager-image-095000\" (UID: \"3fd313b31d1cdd2de01fcba851c698e1\") " pod="kube-system/kube-controller-manager-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.930519    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fd313b31d1cdd2de01fcba851c698e1-kubeconfig\") pod \"kube-controller-manager-image-095000\" (UID: \"3fd313b31d1cdd2de01fcba851c698e1\") " pod="kube-system/kube-controller-manager-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.930526    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3fd313b31d1cdd2de01fcba851c698e1-usr-share-ca-certificates\") pod \"kube-controller-manager-image-095000\" (UID: \"3fd313b31d1cdd2de01fcba851c698e1\") " pod="kube-system/kube-controller-manager-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.930535    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/bcd446df052bf937ebe3715dded5146a-etcd-certs\") pod \"etcd-image-095000\" (UID: \"bcd446df052bf937ebe3715dded5146a\") " pod="kube-system/etcd-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.930545    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/bcd446df052bf937ebe3715dded5146a-etcd-data\") pod \"etcd-image-095000\" (UID: \"bcd446df052bf937ebe3715dded5146a\") " pod="kube-system/etcd-image-095000"
	Jul 08 19:52:29 image-095000 kubelet[2112]: I0708 19:52:29.930552    2112 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c8391fa433f337be78eeab138ebe4e8-k8s-certs\") pod \"kube-apiserver-image-095000\" (UID: \"0c8391fa433f337be78eeab138ebe4e8\") " pod="kube-system/kube-apiserver-image-095000"
	Jul 08 19:52:30 image-095000 kubelet[2112]: I0708 19:52:30.621238    2112 apiserver.go:52] "Watching apiserver"
	Jul 08 19:52:30 image-095000 kubelet[2112]: I0708 19:52:30.626947    2112 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 08 19:52:30 image-095000 kubelet[2112]: E0708 19:52:30.701837    2112 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-095000\" already exists" pod="kube-system/kube-apiserver-image-095000"
	Jul 08 19:52:30 image-095000 kubelet[2112]: I0708 19:52:30.714347    2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-image-095000" podStartSLOduration=1.714327803 podStartE2EDuration="1.714327803s" podCreationTimestamp="2024-07-08 19:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-08 19:52:30.714181219 +0000 UTC m=+1.118612501" watchObservedRunningTime="2024-07-08 19:52:30.714327803 +0000 UTC m=+1.118759084"
	Jul 08 19:52:30 image-095000 kubelet[2112]: I0708 19:52:30.735112    2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-095000" podStartSLOduration=2.735101469 podStartE2EDuration="2.735101469s" podCreationTimestamp="2024-07-08 19:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-08 19:52:30.720546803 +0000 UTC m=+1.124978084" watchObservedRunningTime="2024-07-08 19:52:30.735101469 +0000 UTC m=+1.139532751"
	Jul 08 19:52:30 image-095000 kubelet[2112]: I0708 19:52:30.739144    2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-095000" podStartSLOduration=1.7391350939999999 podStartE2EDuration="1.739135094s" podCreationTimestamp="2024-07-08 19:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-08 19:52:30.739104053 +0000 UTC m=+1.143535334" watchObservedRunningTime="2024-07-08 19:52:30.739135094 +0000 UTC m=+1.143566376"
	Jul 08 19:52:30 image-095000 kubelet[2112]: I0708 19:52:30.739258    2112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-095000" podStartSLOduration=1.739255719 podStartE2EDuration="1.739255719s" podCreationTimestamp="2024-07-08 19:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-08 19:52:30.735322219 +0000 UTC m=+1.139753459" watchObservedRunningTime="2024-07-08 19:52:30.739255719 +0000 UTC m=+1.143687001"
	Jul 08 19:52:32 image-095000 kubelet[2112]: I0708 19:52:32.362591    2112 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-095000 -n image-095000
helpers_test.go:261: (dbg) Run:  kubectl --context image-095000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-095000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-095000 describe pod storage-provisioner: exit status 1 (42.729ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context image-095000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (0.96s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.31s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-389000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-389000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.236294375s)

                                                
                                                
-- stdout --
	* [mount-start-1-389000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-389000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-389000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-389000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-389000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-389000 -n mount-start-1-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-389000 -n mount-start-1-389000: exit status 7 (69.21425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-389000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.31s)
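Every provisioning failure above bottoms out in the same error, Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. the socket_vmnet helper was not listening when the qemu2 driver tried to attach the VM's network. A minimal sketch, assuming Go and the standard library only, of how a pre-flight probe of that socket could look (the socket path is taken verbatim from the log; the function name is illustrative, not part of minikube):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSocketVMNet dials the unix socket that socket_vmnet_client expects to
// find; a "connection refused" here corresponds to the failure in the log above.
func probeSocketVMNet(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
	}
	return conn.Close()
}

func main() {
	if err := probeSocketVMNet("/var/run/socket_vmnet"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket_vmnet is accepting connections")
}

Run on a host where the helper is not running, the dial fails with the same "connection refused" seen throughout these tests.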

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-969000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-969000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.903584833s)

                                                
                                                
-- stdout --
	* [multinode-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-969000" primary control-plane node in "multinode-969000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-969000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:55:33.877370    3258 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:55:33.877506    3258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:55:33.877510    3258 out.go:304] Setting ErrFile to fd 2...
	I0708 12:55:33.877513    3258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:55:33.877659    3258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:55:33.878750    3258 out.go:298] Setting JSON to false
	I0708 12:55:33.894826    3258 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3301,"bootTime":1720465232,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:55:33.894888    3258 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:55:33.901043    3258 out.go:177] * [multinode-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:55:33.909069    3258 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:55:33.909110    3258 notify.go:220] Checking for updates...
	I0708 12:55:33.917007    3258 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:55:33.920021    3258 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:55:33.923017    3258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:55:33.927019    3258 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:55:33.929980    3258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:55:33.933123    3258 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:55:33.936954    3258 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 12:55:33.944005    3258 start.go:297] selected driver: qemu2
	I0708 12:55:33.944012    3258 start.go:901] validating driver "qemu2" against <nil>
	I0708 12:55:33.944027    3258 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:55:33.946374    3258 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 12:55:33.950045    3258 out.go:177] * Automatically selected the socket_vmnet network
	I0708 12:55:33.951466    3258 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:55:33.951483    3258 cni.go:84] Creating CNI manager for ""
	I0708 12:55:33.951486    3258 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0708 12:55:33.951490    3258 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0708 12:55:33.951516    3258 start.go:340] cluster config:
	{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:55:33.955312    3258 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:55:33.963040    3258 out.go:177] * Starting "multinode-969000" primary control-plane node in "multinode-969000" cluster
	I0708 12:55:33.967014    3258 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:55:33.967029    3258 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:55:33.967038    3258 cache.go:56] Caching tarball of preloaded images
	I0708 12:55:33.967101    3258 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:55:33.967107    3258 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:55:33.967313    3258 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/multinode-969000/config.json ...
	I0708 12:55:33.967325    3258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/multinode-969000/config.json: {Name:mkc6742eb9c268cab18f08ee5d75cd04238d2e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:55:33.967544    3258 start.go:360] acquireMachinesLock for multinode-969000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:55:33.967577    3258 start.go:364] duration metric: took 28.167µs to acquireMachinesLock for "multinode-969000"
	I0708 12:55:33.967592    3258 start.go:93] Provisioning new machine with config: &{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:55:33.967617    3258 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 12:55:33.975967    3258 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 12:55:33.993988    3258 start.go:159] libmachine.API.Create for "multinode-969000" (driver="qemu2")
	I0708 12:55:33.994045    3258 client.go:168] LocalClient.Create starting
	I0708 12:55:33.994136    3258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 12:55:33.994176    3258 main.go:141] libmachine: Decoding PEM data...
	I0708 12:55:33.994186    3258 main.go:141] libmachine: Parsing certificate...
	I0708 12:55:33.994228    3258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 12:55:33.994252    3258 main.go:141] libmachine: Decoding PEM data...
	I0708 12:55:33.994261    3258 main.go:141] libmachine: Parsing certificate...
	I0708 12:55:33.994708    3258 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 12:55:34.140182    3258 main.go:141] libmachine: Creating SSH key...
	I0708 12:55:34.322803    3258 main.go:141] libmachine: Creating Disk image...
	I0708 12:55:34.322810    3258 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 12:55:34.323004    3258 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2
	I0708 12:55:34.332468    3258 main.go:141] libmachine: STDOUT: 
	I0708 12:55:34.332488    3258 main.go:141] libmachine: STDERR: 
	I0708 12:55:34.332552    3258 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2 +20000M
	I0708 12:55:34.340403    3258 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 12:55:34.340416    3258 main.go:141] libmachine: STDERR: 
	I0708 12:55:34.340432    3258 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2
	I0708 12:55:34.340436    3258 main.go:141] libmachine: Starting QEMU VM...
	I0708 12:55:34.340464    3258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:23:aa:8c:e8:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2
	I0708 12:55:34.342066    3258 main.go:141] libmachine: STDOUT: 
	I0708 12:55:34.342079    3258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:55:34.342096    3258 client.go:171] duration metric: took 348.04875ms to LocalClient.Create
	I0708 12:55:36.344204    3258 start.go:128] duration metric: took 2.376637041s to createHost
	I0708 12:55:36.344248    3258 start.go:83] releasing machines lock for "multinode-969000", held for 2.376730375s
	W0708 12:55:36.344320    3258 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:55:36.355445    3258 out.go:177] * Deleting "multinode-969000" in qemu2 ...
	W0708 12:55:36.385868    3258 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:55:36.385901    3258 start.go:728] Will try again in 5 seconds ...
	I0708 12:55:41.387944    3258 start.go:360] acquireMachinesLock for multinode-969000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:55:41.388343    3258 start.go:364] duration metric: took 324.708µs to acquireMachinesLock for "multinode-969000"
	I0708 12:55:41.388457    3258 start.go:93] Provisioning new machine with config: &{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:55:41.388797    3258 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 12:55:41.404544    3258 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 12:55:41.453599    3258 start.go:159] libmachine.API.Create for "multinode-969000" (driver="qemu2")
	I0708 12:55:41.453665    3258 client.go:168] LocalClient.Create starting
	I0708 12:55:41.453792    3258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 12:55:41.453852    3258 main.go:141] libmachine: Decoding PEM data...
	I0708 12:55:41.453870    3258 main.go:141] libmachine: Parsing certificate...
	I0708 12:55:41.453940    3258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 12:55:41.453984    3258 main.go:141] libmachine: Decoding PEM data...
	I0708 12:55:41.453995    3258 main.go:141] libmachine: Parsing certificate...
	I0708 12:55:41.454496    3258 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 12:55:41.615546    3258 main.go:141] libmachine: Creating SSH key...
	I0708 12:55:41.687986    3258 main.go:141] libmachine: Creating Disk image...
	I0708 12:55:41.687991    3258 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 12:55:41.688185    3258 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2
	I0708 12:55:41.697338    3258 main.go:141] libmachine: STDOUT: 
	I0708 12:55:41.697354    3258 main.go:141] libmachine: STDERR: 
	I0708 12:55:41.697405    3258 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2 +20000M
	I0708 12:55:41.705184    3258 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 12:55:41.705200    3258 main.go:141] libmachine: STDERR: 
	I0708 12:55:41.705210    3258 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2
	I0708 12:55:41.705216    3258 main.go:141] libmachine: Starting QEMU VM...
	I0708 12:55:41.705252    3258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:d3:85:5a:18:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2
	I0708 12:55:41.706937    3258 main.go:141] libmachine: STDOUT: 
	I0708 12:55:41.706952    3258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:55:41.706965    3258 client.go:171] duration metric: took 253.301958ms to LocalClient.Create
	I0708 12:55:43.709082    3258 start.go:128] duration metric: took 2.3203235s to createHost
	I0708 12:55:43.709132    3258 start.go:83] releasing machines lock for "multinode-969000", held for 2.320827208s
	W0708 12:55:43.709552    3258 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:55:43.720059    3258 out.go:177] 
	W0708 12:55:43.727168    3258 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 12:55:43.727192    3258 out.go:239] * 
	* 
	W0708 12:55:43.729741    3258 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:55:43.739077    3258 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-969000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (64.854042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.97s)
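Before the network attach fails in this test, the log shows the boot disk being prepared with two qemu-img invocations: a raw-to-qcow2 convert followed by a resize. A rough, self-contained Go sketch of that same sequence, assuming qemu-img is on PATH and using placeholder file names rather than the CI machine's paths (the helper name is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// createDiskImage mirrors the two qemu-img steps visible in the log above:
// convert the raw boot disk to qcow2, then grow it by the requested amount.
func createDiskImage(rawPath, qcow2Path, grow string) error {
	steps := [][]string{
		{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", rawPath, qcow2Path},
		{"qemu-img", "resize", qcow2Path, grow},
	}
	for _, args := range steps {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := createDiskImage("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
		fmt.Println(err)
	}
}

In the captured run these two steps succeed (both report empty STDERR); the failure only occurs afterwards, when socket_vmnet_client cannot reach /var/run/socket_vmnet.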

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (111.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.672333ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-969000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- rollout status deployment/busybox: exit status 1 (57.061042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.687792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.8235ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.673417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.437833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0708 12:55:52.026326    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (76.415333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.470625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.711792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.058375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.490708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.527666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0708 12:57:16.024672    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.6475ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.491667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.760875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.768292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.0495ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (29.840958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (111.79s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.842417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (29.81025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-969000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-969000 -v 3 --alsologtostderr: exit status 83 (39.805209ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-969000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-969000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:57:35.719226    3350 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:57:35.719539    3350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:35.719543    3350 out.go:304] Setting ErrFile to fd 2...
	I0708 12:57:35.719545    3350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:35.719711    3350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:57:35.719964    3350 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:57:35.720148    3350 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:57:35.723986    3350 out.go:177] * The control-plane node multinode-969000 host is not running: state=Stopped
	I0708 12:57:35.727675    3350 out.go:177]   To start a cluster, run: "minikube start -p multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-969000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (29.650042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-969000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-969000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (32.134958ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-969000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-969000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-969000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (30.133334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-969000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-969000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-969000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"multinode-969000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (28.83325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
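The ProfileList failure above is a count mismatch, not a parse error: the saved profile's Config.Nodes array still holds only the single control-plane entry because the second node was never created, so the 3-node expectation cannot hold. For readers who want to repeat the check outside the test harness, the following is a minimal Go sketch, not the test's own code; the struct models only the fields visible in the JSON above, and calling the binary as plain "minikube" is an assumption (the run above uses out/minikube-darwin-arm64).

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models just the fields of `minikube profile list --output json`
// that appear in the log above; everything else is omitted.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Hypothetical local invocation; adjust the binary path as needed.
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s (%s): %d node(s) configured\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}

Against the profile dumped above, this would report 1 node for multinode-969000, which is why the 3-node assertion fails.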

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status --output json --alsologtostderr: exit status 7 (29.98875ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-969000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:57:35.925145    3362 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:57:35.925279    3362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:35.925282    3362 out.go:304] Setting ErrFile to fd 2...
	I0708 12:57:35.925285    3362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:35.925402    3362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:57:35.925518    3362 out.go:298] Setting JSON to true
	I0708 12:57:35.925528    3362 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:57:35.925779    3362 notify.go:220] Checking for updates...
	I0708 12:57:35.926376    3362 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:57:35.926389    3362 status.go:255] checking status of multinode-969000 ...
	I0708 12:57:35.926714    3362 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0708 12:57:35.926719    3362 status.go:343] host is not running, skipping remaining checks
	I0708 12:57:35.926721    3362 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-969000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (29.833334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
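The CopyFile failure above is a decoding shape mismatch: with a single stopped node, `minikube status --output json` prints one JSON object, while the test unmarshals into a slice ([]cmd.Status), which produces exactly the "cannot unmarshal object into Go value" error reported. Below is a minimal Go sketch of accepting either shape; the Status struct mirrors the fields printed in the stdout block above and is an illustrative stand-in, not minikube's real cmd.Status type.

package main

import (
	"encoding/json"
	"fmt"
)

// Status stands in for the fields shown in the stdout block above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

// decodeStatuses accepts either a single JSON object (one node) or a
// JSON array (multiple nodes) and always returns a slice.
func decodeStatuses(raw []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	// The single-object form from the log above would fail a direct
	// unmarshal into []Status; the fallback handles it.
	raw := []byte(`{"Name":"multinode-969000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	sts, err := decodeStatuses(raw)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("decoded %d status record(s); host=%s\n", len(sts), sts[0].Host)
}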

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 node stop m03: exit status 85 (47.565ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-969000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status: exit status 7 (29.94ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr: exit status 7 (29.879541ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:57:36.063932    3370 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:57:36.064098    3370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:36.064101    3370 out.go:304] Setting ErrFile to fd 2...
	I0708 12:57:36.064104    3370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:36.064228    3370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:57:36.064363    3370 out.go:298] Setting JSON to false
	I0708 12:57:36.064374    3370 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:57:36.064441    3370 notify.go:220] Checking for updates...
	I0708 12:57:36.064573    3370 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:57:36.064579    3370 status.go:255] checking status of multinode-969000 ...
	I0708 12:57:36.064819    3370 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0708 12:57:36.064822    3370 status.go:343] host is not running, skipping remaining checks
	I0708 12:57:36.064824    3370 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr": multinode-969000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (29.566708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (51.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.035584ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:57:36.122871    3374 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:57:36.123116    3374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:36.123119    3374 out.go:304] Setting ErrFile to fd 2...
	I0708 12:57:36.123122    3374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:36.123278    3374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:57:36.123530    3374 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:57:36.123754    3374 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:57:36.128832    3374 out.go:177] 
	W0708 12:57:36.130260    3374 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0708 12:57:36.130265    3374 out.go:239] * 
	* 
	W0708 12:57:36.131948    3374 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:57:36.134709    3374 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0708 12:57:36.122871    3374 out.go:291] Setting OutFile to fd 1 ...
I0708 12:57:36.123116    3374 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:57:36.123119    3374 out.go:304] Setting ErrFile to fd 2...
I0708 12:57:36.123122    3374 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:57:36.123278    3374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
I0708 12:57:36.123530    3374 mustload.go:65] Loading cluster: multinode-969000
I0708 12:57:36.123754    3374 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0708 12:57:36.128832    3374 out.go:177] 
W0708 12:57:36.130260    3374 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0708 12:57:36.130265    3374 out.go:239] * 
* 
W0708 12:57:36.131948    3374 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0708 12:57:36.134709    3374 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-969000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (30.037958ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:57:36.168120    3376 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:57:36.168284    3376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:36.168287    3376 out.go:304] Setting ErrFile to fd 2...
	I0708 12:57:36.168290    3376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:36.168403    3376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:57:36.168522    3376 out.go:298] Setting JSON to false
	I0708 12:57:36.168532    3376 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:57:36.168595    3376 notify.go:220] Checking for updates...
	I0708 12:57:36.168722    3376 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:57:36.168728    3376 status.go:255] checking status of multinode-969000 ...
	I0708 12:57:36.168931    3376 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0708 12:57:36.168935    3376 status.go:343] host is not running, skipping remaining checks
	I0708 12:57:36.168937    3376 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (72.423708ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:57:37.246331    3378 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:57:37.246541    3378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:37.246546    3378 out.go:304] Setting ErrFile to fd 2...
	I0708 12:57:37.246549    3378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:37.246701    3378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:57:37.246840    3378 out.go:298] Setting JSON to false
	I0708 12:57:37.246853    3378 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:57:37.246888    3378 notify.go:220] Checking for updates...
	I0708 12:57:37.247109    3378 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:57:37.247116    3378 status.go:255] checking status of multinode-969000 ...
	I0708 12:57:37.247393    3378 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0708 12:57:37.247398    3378 status.go:343] host is not running, skipping remaining checks
	I0708 12:57:37.247401    3378 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (70.997166ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:57:39.254209    3380 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:57:39.254414    3380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:39.254418    3380 out.go:304] Setting ErrFile to fd 2...
	I0708 12:57:39.254421    3380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:39.254617    3380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:57:39.254758    3380 out.go:298] Setting JSON to false
	I0708 12:57:39.254773    3380 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:57:39.254811    3380 notify.go:220] Checking for updates...
	I0708 12:57:39.255027    3380 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:57:39.255035    3380 status.go:255] checking status of multinode-969000 ...
	I0708 12:57:39.255304    3380 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0708 12:57:39.255309    3380 status.go:343] host is not running, skipping remaining checks
	I0708 12:57:39.255312    3380 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (74.276125ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:57:42.123227    3388 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:57:42.123456    3388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:42.123460    3388 out.go:304] Setting ErrFile to fd 2...
	I0708 12:57:42.123464    3388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:42.123680    3388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:57:42.123863    3388 out.go:298] Setting JSON to false
	I0708 12:57:42.123878    3388 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:57:42.123918    3388 notify.go:220] Checking for updates...
	I0708 12:57:42.124168    3388 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:57:42.124176    3388 status.go:255] checking status of multinode-969000 ...
	I0708 12:57:42.124483    3388 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0708 12:57:42.124488    3388 status.go:343] host is not running, skipping remaining checks
	I0708 12:57:42.124490    3388 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (73.409416ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:57:44.185778    3390 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:57:44.185991    3390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:44.185995    3390 out.go:304] Setting ErrFile to fd 2...
	I0708 12:57:44.185999    3390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:44.186174    3390 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:57:44.186322    3390 out.go:298] Setting JSON to false
	I0708 12:57:44.186335    3390 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:57:44.186367    3390 notify.go:220] Checking for updates...
	I0708 12:57:44.186641    3390 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:57:44.186651    3390 status.go:255] checking status of multinode-969000 ...
	I0708 12:57:44.186937    3390 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0708 12:57:44.186943    3390 status.go:343] host is not running, skipping remaining checks
	I0708 12:57:44.186946    3390 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (72.764667ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:57:47.975898    3392 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:57:47.976080    3392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:47.976085    3392 out.go:304] Setting ErrFile to fd 2...
	I0708 12:57:47.976087    3392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:47.976266    3392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:57:47.976412    3392 out.go:298] Setting JSON to false
	I0708 12:57:47.976425    3392 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:57:47.976469    3392 notify.go:220] Checking for updates...
	I0708 12:57:47.976693    3392 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:57:47.976701    3392 status.go:255] checking status of multinode-969000 ...
	I0708 12:57:47.976979    3392 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0708 12:57:47.976984    3392 status.go:343] host is not running, skipping remaining checks
	I0708 12:57:47.976987    3392 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (70.741583ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:57:54.721820    3394 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:57:54.722029    3394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:54.722034    3394 out.go:304] Setting ErrFile to fd 2...
	I0708 12:57:54.722037    3394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:57:54.722221    3394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:57:54.722369    3394 out.go:298] Setting JSON to false
	I0708 12:57:54.722385    3394 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:57:54.722426    3394 notify.go:220] Checking for updates...
	I0708 12:57:54.722656    3394 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:57:54.722663    3394 status.go:255] checking status of multinode-969000 ...
	I0708 12:57:54.722965    3394 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0708 12:57:54.722970    3394 status.go:343] host is not running, skipping remaining checks
	I0708 12:57:54.722973    3394 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (73.265416ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:58:06.376582    3398 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:58:06.376793    3398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:58:06.376797    3398 out.go:304] Setting ErrFile to fd 2...
	I0708 12:58:06.376801    3398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:58:06.376989    3398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:58:06.377143    3398 out.go:298] Setting JSON to false
	I0708 12:58:06.377156    3398 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:58:06.377190    3398 notify.go:220] Checking for updates...
	I0708 12:58:06.377418    3398 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:58:06.377425    3398 status.go:255] checking status of multinode-969000 ...
	I0708 12:58:06.377687    3398 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0708 12:58:06.377692    3398 status.go:343] host is not running, skipping remaining checks
	I0708 12:58:06.377695    3398 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (71.910209ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:58:27.995613    3400 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:58:27.995834    3400 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:58:27.995839    3400 out.go:304] Setting ErrFile to fd 2...
	I0708 12:58:27.995842    3400 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:58:27.996000    3400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:58:27.996152    3400 out.go:298] Setting JSON to false
	I0708 12:58:27.996165    3400 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:58:27.996206    3400 notify.go:220] Checking for updates...
	I0708 12:58:27.996416    3400 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:58:27.996423    3400 status.go:255] checking status of multinode-969000 ...
	I0708 12:58:27.996679    3400 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0708 12:58:27.996684    3400 status.go:343] host is not running, skipping remaining checks
	I0708 12:58:27.996687    3400 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (32.132041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.94s)
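
The repeated status probes above exit with status 7 while still printing a host state of "Stopped", and the post-mortem helper notes that exit status 7 "may be ok". A minimal Go sketch of that kind of probe, reusing the "status --format={{.Host}}" invocation and the exit code seen in this log; the binary path, profile name, and the decision to tolerate exit 7 are illustrative assumptions, not the test helpers' actual implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostState runs `minikube status --format={{.Host}}` for a profile and
	// returns the printed host state. Exit status 7 is tolerated because, as
	// the helper output above shows, it simply means the host is not running.
	func hostState(binary, profile string) (string, error) {
		cmd := exec.Command(binary, "status", "--format={{.Host}}", "-p", profile)
		out, err := cmd.Output() // stdout is still captured on a non-zero exit
		state := strings.TrimSpace(string(out))
		if err != nil {
			if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
				return state, nil // e.g. "Stopped"
			}
			return state, err
		}
		return state, nil
	}

	func main() {
		state, err := hostState("out/minikube-darwin-arm64", "multinode-969000")
		if err != nil {
			fmt.Println("status failed:", err)
			return
		}
		fmt.Println("host state:", state)
	}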

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-969000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-969000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-969000: (3.322846375s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-969000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-969000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.220010416s)

                                                
                                                
-- stdout --
	* [multinode-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-969000" primary control-plane node in "multinode-969000" cluster
	* Restarting existing qemu2 VM for "multinode-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:58:31.445714    3424 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:58:31.445886    3424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:58:31.445890    3424 out.go:304] Setting ErrFile to fd 2...
	I0708 12:58:31.445893    3424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:58:31.446049    3424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:58:31.447234    3424 out.go:298] Setting JSON to false
	I0708 12:58:31.466186    3424 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3479,"bootTime":1720465232,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:58:31.466284    3424 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:58:31.470255    3424 out.go:177] * [multinode-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:58:31.478255    3424 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:58:31.478290    3424 notify.go:220] Checking for updates...
	I0708 12:58:31.485157    3424 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:58:31.488194    3424 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:58:31.491185    3424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:58:31.494144    3424 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:58:31.497155    3424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:58:31.500426    3424 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:58:31.500481    3424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:58:31.505152    3424 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 12:58:31.511153    3424 start.go:297] selected driver: qemu2
	I0708 12:58:31.511161    3424 start.go:901] validating driver "qemu2" against &{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:multinode-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:58:31.511223    3424 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:58:31.513632    3424 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:58:31.513678    3424 cni.go:84] Creating CNI manager for ""
	I0708 12:58:31.513684    3424 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:58:31.513736    3424 start.go:340] cluster config:
	{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-969000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:58:31.517474    3424 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:58:31.525225    3424 out.go:177] * Starting "multinode-969000" primary control-plane node in "multinode-969000" cluster
	I0708 12:58:31.529134    3424 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:58:31.529149    3424 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:58:31.529159    3424 cache.go:56] Caching tarball of preloaded images
	I0708 12:58:31.529213    3424 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:58:31.529218    3424 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:58:31.529270    3424 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/multinode-969000/config.json ...
	I0708 12:58:31.529699    3424 start.go:360] acquireMachinesLock for multinode-969000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:58:31.529734    3424 start.go:364] duration metric: took 29.208µs to acquireMachinesLock for "multinode-969000"
	I0708 12:58:31.529743    3424 start.go:96] Skipping create...Using existing machine configuration
	I0708 12:58:31.529751    3424 fix.go:54] fixHost starting: 
	I0708 12:58:31.529867    3424 fix.go:112] recreateIfNeeded on multinode-969000: state=Stopped err=<nil>
	W0708 12:58:31.529877    3424 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 12:58:31.538145    3424 out.go:177] * Restarting existing qemu2 VM for "multinode-969000" ...
	I0708 12:58:31.542183    3424 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:d3:85:5a:18:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2
	I0708 12:58:31.544236    3424 main.go:141] libmachine: STDOUT: 
	I0708 12:58:31.544254    3424 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:58:31.544283    3424 fix.go:56] duration metric: took 14.533416ms for fixHost
	I0708 12:58:31.544288    3424 start.go:83] releasing machines lock for "multinode-969000", held for 14.550291ms
	W0708 12:58:31.544294    3424 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 12:58:31.544322    3424 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:58:31.544327    3424 start.go:728] Will try again in 5 seconds ...
	I0708 12:58:36.546305    3424 start.go:360] acquireMachinesLock for multinode-969000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:58:36.546585    3424 start.go:364] duration metric: took 229.916µs to acquireMachinesLock for "multinode-969000"
	I0708 12:58:36.546718    3424 start.go:96] Skipping create...Using existing machine configuration
	I0708 12:58:36.546735    3424 fix.go:54] fixHost starting: 
	I0708 12:58:36.547413    3424 fix.go:112] recreateIfNeeded on multinode-969000: state=Stopped err=<nil>
	W0708 12:58:36.547434    3424 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 12:58:36.554819    3424 out.go:177] * Restarting existing qemu2 VM for "multinode-969000" ...
	I0708 12:58:36.558960    3424 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:d3:85:5a:18:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2
	I0708 12:58:36.565647    3424 main.go:141] libmachine: STDOUT: 
	I0708 12:58:36.565689    3424 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:58:36.565744    3424 fix.go:56] duration metric: took 19.007417ms for fixHost
	I0708 12:58:36.565762    3424 start.go:83] releasing machines lock for "multinode-969000", held for 19.1545ms
	W0708 12:58:36.565925    3424 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:58:36.573811    3424 out.go:177] 
	W0708 12:58:36.577824    3424 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 12:58:36.577843    3424 out.go:239] * 
	* 
	W0708 12:58:36.579764    3424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:58:36.589765    3424 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-969000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-969000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (32.096292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.67s)
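
Every restart attempt above dies at the same point: socket_vmnet_client cannot connect to "/var/run/socket_vmnet" ("Connection refused"), so the qemu2 VM never gets its network file descriptor. The condition can be reproduced ahead of time by probing the socket directly. A small diagnostic sketch, assuming only the socket path shown in the log; it is not part of minikube or of the test suite:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// probeSocketVMnet reports whether anything is accepting connections on the
	// socket_vmnet unix socket that minikube's qemu2 driver relies on.
	func probeSocketVMnet(path string) error {
		if _, err := os.Stat(path); err != nil {
			return fmt.Errorf("socket file missing: %w", err)
		}
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			// With no listener this fails the same way the driver log does:
			// "connect: connection refused".
			return fmt.Errorf("no listener on %s: %w", path, err)
		}
		conn.Close()
		return nil
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is accepting connections")
	}

On this agent the probe would presumably fail, which is consistent with every qemu2 start in this report failing before provisioning begins.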

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 node delete m03: exit status 83 (41.263292ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-969000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-969000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-969000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr: exit status 7 (29.198625ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:58:36.771618    3438 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:58:36.771768    3438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:58:36.771771    3438 out.go:304] Setting ErrFile to fd 2...
	I0708 12:58:36.771773    3438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:58:36.771930    3438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:58:36.772041    3438 out.go:298] Setting JSON to false
	I0708 12:58:36.772052    3438 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:58:36.772108    3438 notify.go:220] Checking for updates...
	I0708 12:58:36.772247    3438 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:58:36.772253    3438 status.go:255] checking status of multinode-969000 ...
	I0708 12:58:36.772475    3438 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0708 12:58:36.772479    3438 status.go:343] host is not running, skipping remaining checks
	I0708 12:58:36.772482    3438 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (29.446583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
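
Three distinct exit codes recur in this report: 80 when "start" fails to provision the guest (GUEST_PROVISION), 83 when a subcommand such as "node delete" is refused because the control-plane host is not running, and 7 when "status" reports a stopped host. The sketch below just restates that mapping in Go; it is inferred from the output captured here, not from minikube's documented exit-code table:

	package main

	import "fmt"

	// classifyExit maps the minikube exit codes observed in this report to the
	// behaviour that accompanied them. Inferred from the captured output only.
	func classifyExit(code int) string {
		switch code {
		case 0:
			return "command succeeded"
		case 7:
			return "status: host reported as Stopped"
		case 80:
			return "start: guest provisioning failed (GUEST_PROVISION)"
		case 83:
			return "command refused: control-plane host is not running"
		default:
			return "not seen in this report"
		}
	}

	func main() {
		for _, code := range []int{7, 80, 83} {
			fmt.Printf("exit %d: %s\n", code, classifyExit(code))
		}
	}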

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 stop
E0708 12:58:39.095152    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-969000 stop: (3.017389292s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status: exit status 7 (62.42ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr: exit status 7 (31.70975ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:58:39.913196    3462 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:58:39.913354    3462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:58:39.913357    3462 out.go:304] Setting ErrFile to fd 2...
	I0708 12:58:39.913360    3462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:58:39.913493    3462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:58:39.913614    3462 out.go:298] Setting JSON to false
	I0708 12:58:39.913627    3462 mustload.go:65] Loading cluster: multinode-969000
	I0708 12:58:39.913686    3462 notify.go:220] Checking for updates...
	I0708 12:58:39.913830    3462 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:58:39.913837    3462 status.go:255] checking status of multinode-969000 ...
	I0708 12:58:39.914067    3462 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0708 12:58:39.914070    3462 status.go:343] host is not running, skipping remaining checks
	I0708 12:58:39.914072    3462 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr": multinode-969000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr": multinode-969000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (28.616458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.14s)
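
The two assertions above report an "incorrect number of stopped hosts" and "incorrect number of stopped kubelets": the status output lists only the single control-plane node, while the multi-node test presumably expects two stopped nodes. A sketch of the kind of counting check those messages imply, assuming an expected node count of two and a simple substring count over the text output; this is not the test's actual logic:

	package main

	import (
		"fmt"
		"strings"
	)

	// countStopped tallies how many node blocks in a `minikube status` text
	// report show a stopped host and a stopped kubelet.
	func countStopped(statusOutput string) (hosts, kubelets int) {
		hosts = strings.Count(statusOutput, "host: Stopped")
		kubelets = strings.Count(statusOutput, "kubelet: Stopped")
		return hosts, kubelets
	}

	func main() {
		// The output captured above contains a single node block.
		report := "multinode-969000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		hosts, kubelets := countStopped(report)
		wantNodes := 2 // assumed expectation for a two-node cluster
		fmt.Printf("stopped hosts=%d, kubelets=%d (want %d each)\n", hosts, kubelets, wantNodes)
		if hosts != wantNodes || kubelets != wantNodes {
			fmt.Println("incorrect number of stopped hosts/kubelets")
		}
	}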

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-969000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-969000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.178927792s)

                                                
                                                
-- stdout --
	* [multinode-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-969000" primary control-plane node in "multinode-969000" cluster
	* Restarting existing qemu2 VM for "multinode-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:58:39.971256    3466 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:58:39.971393    3466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:58:39.971396    3466 out.go:304] Setting ErrFile to fd 2...
	I0708 12:58:39.971398    3466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:58:39.971514    3466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:58:39.972576    3466 out.go:298] Setting JSON to false
	I0708 12:58:39.988563    3466 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3487,"bootTime":1720465232,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:58:39.988628    3466 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:58:39.994019    3466 out.go:177] * [multinode-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:58:40.000952    3466 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:58:40.001031    3466 notify.go:220] Checking for updates...
	I0708 12:58:40.007940    3466 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:58:40.010942    3466 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:58:40.013961    3466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:58:40.016925    3466 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:58:40.019984    3466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:58:40.023238    3466 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:58:40.023491    3466 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:58:40.027900    3466 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 12:58:40.033927    3466 start.go:297] selected driver: qemu2
	I0708 12:58:40.033938    3466 start.go:901] validating driver "qemu2" against &{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:multinode-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:58:40.033983    3466 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:58:40.036253    3466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:58:40.036298    3466 cni.go:84] Creating CNI manager for ""
	I0708 12:58:40.036304    3466 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0708 12:58:40.036346    3466 start.go:340] cluster config:
	{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-969000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:58:40.039873    3466 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:58:40.046926    3466 out.go:177] * Starting "multinode-969000" primary control-plane node in "multinode-969000" cluster
	I0708 12:58:40.050918    3466 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:58:40.050935    3466 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:58:40.050943    3466 cache.go:56] Caching tarball of preloaded images
	I0708 12:58:40.050993    3466 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 12:58:40.050999    3466 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:58:40.051071    3466 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/multinode-969000/config.json ...
	I0708 12:58:40.051495    3466 start.go:360] acquireMachinesLock for multinode-969000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:58:40.051522    3466 start.go:364] duration metric: took 21.959µs to acquireMachinesLock for "multinode-969000"
	I0708 12:58:40.051530    3466 start.go:96] Skipping create...Using existing machine configuration
	I0708 12:58:40.051539    3466 fix.go:54] fixHost starting: 
	I0708 12:58:40.051656    3466 fix.go:112] recreateIfNeeded on multinode-969000: state=Stopped err=<nil>
	W0708 12:58:40.051665    3466 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 12:58:40.055967    3466 out.go:177] * Restarting existing qemu2 VM for "multinode-969000" ...
	I0708 12:58:40.063961    3466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:d3:85:5a:18:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2
	I0708 12:58:40.066031    3466 main.go:141] libmachine: STDOUT: 
	I0708 12:58:40.066049    3466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:58:40.066077    3466 fix.go:56] duration metric: took 14.539625ms for fixHost
	I0708 12:58:40.066082    3466 start.go:83] releasing machines lock for "multinode-969000", held for 14.5565ms
	W0708 12:58:40.066088    3466 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 12:58:40.066118    3466 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:58:40.066124    3466 start.go:728] Will try again in 5 seconds ...
	I0708 12:58:45.066374    3466 start.go:360] acquireMachinesLock for multinode-969000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:58:45.066762    3466 start.go:364] duration metric: took 318.458µs to acquireMachinesLock for "multinode-969000"
	I0708 12:58:45.066932    3466 start.go:96] Skipping create...Using existing machine configuration
	I0708 12:58:45.066957    3466 fix.go:54] fixHost starting: 
	I0708 12:58:45.067658    3466 fix.go:112] recreateIfNeeded on multinode-969000: state=Stopped err=<nil>
	W0708 12:58:45.067682    3466 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 12:58:45.075103    3466 out.go:177] * Restarting existing qemu2 VM for "multinode-969000" ...
	I0708 12:58:45.079177    3466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:d3:85:5a:18:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/multinode-969000/disk.qcow2
	I0708 12:58:45.088088    3466 main.go:141] libmachine: STDOUT: 
	I0708 12:58:45.088173    3466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:58:45.088252    3466 fix.go:56] duration metric: took 21.295417ms for fixHost
	I0708 12:58:45.088276    3466 start.go:83] releasing machines lock for "multinode-969000", held for 21.489042ms
	W0708 12:58:45.088444    3466 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:58:45.094496    3466 out.go:177] 
	W0708 12:58:45.098143    3466 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 12:58:45.098215    3466 out.go:239] * 
	* 
	W0708 12:58:45.100858    3466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:58:45.109141    3466 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-969000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (65.485625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
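
As in the RestartKeepsNodes attempt, the driver tries fixHost once, logs "StartHost failed, but will try again", sleeps five seconds, retries exactly once, and then gives up with GUEST_PROVISION. A compact sketch of that single-retry shape, illustrative only and not minikube's implementation:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithOneRetry mirrors the behaviour visible in the log: one start
	// attempt, a warning, a fixed five-second pause, then a single retry.
	func startWithOneRetry(start func() error) error {
		if err := start(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := start(); err != nil {
				return fmt.Errorf("error provisioning guest: %w", err)
			}
		}
		return nil
	}

	func main() {
		// Simulate the failure mode from this report: the socket is never reachable.
		start := func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		}
		if err := startWithOneRetry(start); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}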

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-969000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-969000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-969000-m01 --driver=qemu2 : exit status 80 (9.845652542s)

                                                
                                                
-- stdout --
	* [multinode-969000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-969000-m01" primary control-plane node in "multinode-969000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-969000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-969000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-969000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-969000-m02 --driver=qemu2 : exit status 80 (10.068771584s)

                                                
                                                
-- stdout --
	* [multinode-969000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-969000-m02" primary control-plane node in "multinode-969000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-969000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-969000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-969000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-969000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-969000: exit status 83 (78.840167ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-969000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-969000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-969000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (30.639958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.13s)

                                                
                                    
TestPreload (10.09s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-380000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-380000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.944068209s)

                                                
                                                
-- stdout --
	* [test-preload-380000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-380000" primary control-plane node in "test-preload-380000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-380000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 12:59:05.456463    3521 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:59:05.456592    3521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:59:05.456596    3521 out.go:304] Setting ErrFile to fd 2...
	I0708 12:59:05.456599    3521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:59:05.456729    3521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:59:05.457796    3521 out.go:298] Setting JSON to false
	I0708 12:59:05.473889    3521 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3513,"bootTime":1720465232,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:59:05.473949    3521 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:59:05.479625    3521 out.go:177] * [test-preload-380000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:59:05.487564    3521 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:59:05.487610    3521 notify.go:220] Checking for updates...
	I0708 12:59:05.492477    3521 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:59:05.495512    3521 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:59:05.498520    3521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:59:05.501519    3521 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:59:05.504474    3521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:59:05.507903    3521 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:59:05.507947    3521 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:59:05.512426    3521 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 12:59:05.519524    3521 start.go:297] selected driver: qemu2
	I0708 12:59:05.519530    3521 start.go:901] validating driver "qemu2" against <nil>
	I0708 12:59:05.519538    3521 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:59:05.521819    3521 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 12:59:05.525443    3521 out.go:177] * Automatically selected the socket_vmnet network
	I0708 12:59:05.528594    3521 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 12:59:05.528632    3521 cni.go:84] Creating CNI manager for ""
	I0708 12:59:05.528645    3521 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 12:59:05.528650    3521 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 12:59:05.528690    3521 start.go:340] cluster config:
	{Name:test-preload-380000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-380000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:59:05.532538    3521 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:59:05.538485    3521 out.go:177] * Starting "test-preload-380000" primary control-plane node in "test-preload-380000" cluster
	I0708 12:59:05.542487    3521 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0708 12:59:05.542560    3521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/test-preload-380000/config.json ...
	I0708 12:59:05.542585    3521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/test-preload-380000/config.json: {Name:mk977125018f50d3febce36dc69cd0316024336d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:59:05.542576    3521 cache.go:107] acquiring lock: {Name:mk43c5524ca8b3797c7d4740d3a67d2fb050229b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:59:05.542576    3521 cache.go:107] acquiring lock: {Name:mk48eaa7950e96669e6f1d9da14b3b30130cdc0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:59:05.542590    3521 cache.go:107] acquiring lock: {Name:mk5fa6493eb039ca04e125a68a5478c88a6ce96f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:59:05.542810    3521 cache.go:107] acquiring lock: {Name:mk72ece6298ebdf28ed9e1f4df8dbfab350b1dff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:59:05.542857    3521 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0708 12:59:05.542859    3521 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0708 12:59:05.542859    3521 cache.go:107] acquiring lock: {Name:mk7b51e16d3383f3fe38837f1aea74100c1ca5db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:59:05.542898    3521 cache.go:107] acquiring lock: {Name:mk9433448061e1f83773668c27384976f4045ebd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:59:05.542893    3521 cache.go:107] acquiring lock: {Name:mk8ba6b0cf1b501b9fd51cbe150c96497f5ff36d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:59:05.542894    3521 cache.go:107] acquiring lock: {Name:mk7662fafe623abb483d2a2a5c1e47d386ed7190 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:59:05.542842    3521 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 12:59:05.543004    3521 start.go:360] acquireMachinesLock for test-preload-380000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:59:05.543048    3521 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 12:59:05.543068    3521 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0708 12:59:05.543087    3521 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0708 12:59:05.543122    3521 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0708 12:59:05.543160    3521 start.go:364] duration metric: took 150.5µs to acquireMachinesLock for "test-preload-380000"
	I0708 12:59:05.543171    3521 start.go:93] Provisioning new machine with config: &{Name:test-preload-380000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-380000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:59:05.543224    3521 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 12:59:05.543227    3521 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0708 12:59:05.551498    3521 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 12:59:05.555799    3521 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0708 12:59:05.555859    3521 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 12:59:05.555922    3521 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0708 12:59:05.556492    3521 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 12:59:05.558705    3521 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0708 12:59:05.558903    3521 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0708 12:59:05.558948    3521 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0708 12:59:05.558973    3521 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0708 12:59:05.569446    3521 start.go:159] libmachine.API.Create for "test-preload-380000" (driver="qemu2")
	I0708 12:59:05.569463    3521 client.go:168] LocalClient.Create starting
	I0708 12:59:05.569540    3521 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 12:59:05.569571    3521 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:05.569579    3521 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:05.569624    3521 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 12:59:05.569648    3521 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:05.569657    3521 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:05.569962    3521 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 12:59:05.856319    3521 main.go:141] libmachine: Creating SSH key...
	I0708 12:59:05.961543    3521 main.go:141] libmachine: Creating Disk image...
	I0708 12:59:05.961563    3521 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 12:59:05.961796    3521 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/disk.qcow2
	I0708 12:59:05.972067    3521 main.go:141] libmachine: STDOUT: 
	I0708 12:59:05.972083    3521 main.go:141] libmachine: STDERR: 
	I0708 12:59:05.972129    3521 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/disk.qcow2 +20000M
	I0708 12:59:05.980786    3521 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 12:59:05.980809    3521 main.go:141] libmachine: STDERR: 
	I0708 12:59:05.980830    3521 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/disk.qcow2
	I0708 12:59:05.980833    3521 main.go:141] libmachine: Starting QEMU VM...
	I0708 12:59:05.980863    3521 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:7c:a7:21:ee:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/disk.qcow2
	I0708 12:59:05.983236    3521 main.go:141] libmachine: STDOUT: 
	I0708 12:59:05.983257    3521 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:59:05.983282    3521 client.go:171] duration metric: took 413.824667ms to LocalClient.Create
	W0708 12:59:06.215171    3521 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0708 12:59:06.215199    3521 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0708 12:59:06.222060    3521 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0708 12:59:06.223308    3521 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0708 12:59:06.247562    3521 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0708 12:59:06.257428    3521 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0708 12:59:06.313640    3521 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0708 12:59:06.336941    3521 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0708 12:59:06.336964    3521 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 794.086208ms
	I0708 12:59:06.336982    3521 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0708 12:59:06.358420    3521 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0708 12:59:06.595105    3521 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0708 12:59:06.595189    3521 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0708 12:59:06.933539    3521 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0708 12:59:06.933592    3521 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.391054s
	I0708 12:59:06.933625    3521 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0708 12:59:07.663244    3521 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0708 12:59:07.663292    3521 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.120484666s
	I0708 12:59:07.663336    3521 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0708 12:59:07.983485    3521 start.go:128] duration metric: took 2.440306292s to createHost
	I0708 12:59:07.983541    3521 start.go:83] releasing machines lock for "test-preload-380000", held for 2.440441625s
	W0708 12:59:07.983599    3521 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:59:07.995539    3521 out.go:177] * Deleting "test-preload-380000" in qemu2 ...
	W0708 12:59:08.024047    3521 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:59:08.024072    3521 start.go:728] Will try again in 5 seconds ...
	I0708 12:59:09.077078    3521 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0708 12:59:09.077149    3521 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.534409708s
	I0708 12:59:09.077176    3521 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0708 12:59:09.249931    3521 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0708 12:59:09.249973    3521 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.707312208s
	I0708 12:59:09.249996    3521 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0708 12:59:10.259594    3521 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0708 12:59:10.259648    3521 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.71720525s
	I0708 12:59:10.259672    3521 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0708 12:59:12.150799    3521 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0708 12:59:12.150875    3521 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.608474125s
	I0708 12:59:12.150933    3521 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0708 12:59:13.024646    3521 start.go:360] acquireMachinesLock for test-preload-380000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 12:59:13.025051    3521 start.go:364] duration metric: took 323.125µs to acquireMachinesLock for "test-preload-380000"
	I0708 12:59:13.025181    3521 start.go:93] Provisioning new machine with config: &{Name:test-preload-380000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-380000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 12:59:13.025375    3521 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 12:59:13.033960    3521 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 12:59:13.081960    3521 start.go:159] libmachine.API.Create for "test-preload-380000" (driver="qemu2")
	I0708 12:59:13.082007    3521 client.go:168] LocalClient.Create starting
	I0708 12:59:13.082131    3521 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 12:59:13.082188    3521 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:13.082211    3521 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:13.082278    3521 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 12:59:13.082322    3521 main.go:141] libmachine: Decoding PEM data...
	I0708 12:59:13.082341    3521 main.go:141] libmachine: Parsing certificate...
	I0708 12:59:13.082873    3521 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 12:59:13.236807    3521 main.go:141] libmachine: Creating SSH key...
	I0708 12:59:13.300669    3521 main.go:141] libmachine: Creating Disk image...
	I0708 12:59:13.300674    3521 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 12:59:13.300875    3521 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/disk.qcow2
	I0708 12:59:13.310252    3521 main.go:141] libmachine: STDOUT: 
	I0708 12:59:13.310270    3521 main.go:141] libmachine: STDERR: 
	I0708 12:59:13.310327    3521 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/disk.qcow2 +20000M
	I0708 12:59:13.318395    3521 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 12:59:13.318409    3521 main.go:141] libmachine: STDERR: 
	I0708 12:59:13.318425    3521 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/disk.qcow2
	I0708 12:59:13.318427    3521 main.go:141] libmachine: Starting QEMU VM...
	I0708 12:59:13.318471    3521 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:35:35:49:4b:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/test-preload-380000/disk.qcow2
	I0708 12:59:13.320172    3521 main.go:141] libmachine: STDOUT: 
	I0708 12:59:13.320188    3521 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 12:59:13.320204    3521 client.go:171] duration metric: took 238.197833ms to LocalClient.Create
	I0708 12:59:14.781914    3521 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0708 12:59:14.782004    3521 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.239420292s
	I0708 12:59:14.782059    3521 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0708 12:59:14.782110    3521 cache.go:87] Successfully saved all images to host disk.
	I0708 12:59:15.322409    3521 start.go:128] duration metric: took 2.297027167s to createHost
	I0708 12:59:15.322511    3521 start.go:83] releasing machines lock for "test-preload-380000", held for 2.29750075s
	W0708 12:59:15.322880    3521 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-380000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-380000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 12:59:15.338355    3521 out.go:177] 
	W0708 12:59:15.342456    3521 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 12:59:15.342480    3521 out.go:239] * 
	* 
	W0708 12:59:15.345232    3521 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:59:15.358282    3521 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-380000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-08 12:59:15.375558 -0700 PDT m=+1854.828004751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-380000 -n test-preload-380000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-380000 -n test-preload-380000: exit status 7 (66.064208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-380000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-380000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-380000
--- FAIL: TestPreload (10.09s)
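
Note: the stderr capture above shows where the failure happens: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ..., and the client exits with "Failed to connect to \"/var/run/socket_vmnet\": Connection refused" because nothing is accepting connections on that socket (the SocketVMnetPath value in the dumped cluster config). A minimal, hypothetical probe of that precondition, reduced to a plain unix-socket dial in Go (not part of preload_test.go):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Path taken from SocketVMnetPath in the cluster config above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With no socket_vmnet daemon listening this fails, typically with
			// "connect: connection refused", matching the errors in these logs.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this dial fails the same way outside the tests, the problem is the host's socket_vmnet service rather than any individual minikube profile.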

                                                
                                    
TestScheduledStopUnix (10s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-704000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-704000 --memory=2048 --driver=qemu2 : exit status 80 (9.858009292s)

                                                
                                                
-- stdout --
	* [scheduled-stop-704000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-704000" primary control-plane node in "scheduled-stop-704000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-704000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-704000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-704000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-704000" primary control-plane node in "scheduled-stop-704000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-704000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-704000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-08 12:59:25.3812 -0700 PDT m=+1864.833932626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-704000 -n scheduled-stop-704000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-704000 -n scheduled-stop-704000: exit status 7 (65.901166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-704000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-704000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-704000
--- FAIL: TestScheduledStopUnix (10.00s)

                                                
                                    
TestSkaffold (12.78s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe4212585743 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-186000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-186000 --memory=2600 --driver=qemu2 : exit status 80 (9.82841625s)

                                                
                                                
-- stdout --
	* [skaffold-186000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-186000" primary control-plane node in "skaffold-186000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-186000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-186000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-186000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-186000" primary control-plane node in "skaffold-186000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-186000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-186000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-08 12:59:38.16437 -0700 PDT m=+1877.617468376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-186000 -n skaffold-186000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-186000 -n skaffold-186000: exit status 7 (60.800292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-186000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-186000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-186000
--- FAIL: TestSkaffold (12.78s)

                                                
                                    
TestRunningBinaryUpgrade (599.87s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3856640623 start -p running-upgrade-129000 --memory=2200 --vm-driver=qemu2 
E0708 13:00:52.016999    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3856640623 start -p running-upgrade-129000 --memory=2200 --vm-driver=qemu2 : (50.629235334s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-129000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0708 13:02:16.015705    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-129000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m33.845452459s)

                                                
                                                
-- stdout --
	* [running-upgrade-129000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-129000" primary control-plane node in "running-upgrade-129000" cluster
	* Updating the running qemu2 "running-upgrade-129000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:01:11.092143    3932 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:01:11.092348    3932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:01:11.092351    3932 out.go:304] Setting ErrFile to fd 2...
	I0708 13:01:11.092354    3932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:01:11.092482    3932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:01:11.093634    3932 out.go:298] Setting JSON to false
	I0708 13:01:11.111902    3932 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3639,"bootTime":1720465232,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:01:11.111968    3932 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:01:11.116363    3932 out.go:177] * [running-upgrade-129000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:01:11.123342    3932 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:01:11.123401    3932 notify.go:220] Checking for updates...
	I0708 13:01:11.130262    3932 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:01:11.133339    3932 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:01:11.136360    3932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:01:11.139279    3932 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:01:11.142387    3932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:01:11.145674    3932 config.go:182] Loaded profile config "running-upgrade-129000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:01:11.149261    3932 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0708 13:01:11.152343    3932 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:01:11.156380    3932 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 13:01:11.163303    3932 start.go:297] selected driver: qemu2
	I0708 13:01:11.163309    3932 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-129000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50391 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-129000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0708 13:01:11.163361    3932 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:01:11.166085    3932 cni.go:84] Creating CNI manager for ""
	I0708 13:01:11.166101    3932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:01:11.166127    3932 start.go:340] cluster config:
	{Name:running-upgrade-129000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50391 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-129000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0708 13:01:11.166174    3932 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:01:11.171326    3932 out.go:177] * Starting "running-upgrade-129000" primary control-plane node in "running-upgrade-129000" cluster
	I0708 13:01:11.175323    3932 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0708 13:01:11.175337    3932 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0708 13:01:11.175344    3932 cache.go:56] Caching tarball of preloaded images
	I0708 13:01:11.175403    3932 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:01:11.175408    3932 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0708 13:01:11.175477    3932 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/config.json ...
	I0708 13:01:11.175912    3932 start.go:360] acquireMachinesLock for running-upgrade-129000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:01:11.175943    3932 start.go:364] duration metric: took 26.041µs to acquireMachinesLock for "running-upgrade-129000"
	I0708 13:01:11.175951    3932 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:01:11.175957    3932 fix.go:54] fixHost starting: 
	I0708 13:01:11.176578    3932 fix.go:112] recreateIfNeeded on running-upgrade-129000: state=Running err=<nil>
	W0708 13:01:11.176587    3932 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:01:11.181247    3932 out.go:177] * Updating the running qemu2 "running-upgrade-129000" VM ...
	I0708 13:01:11.189114    3932 machine.go:94] provisionDockerMachine start ...
	I0708 13:01:11.189156    3932 main.go:141] libmachine: Using SSH client type: native
	I0708 13:01:11.189291    3932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fd2920] 0x102fd5180 <nil>  [] 0s} localhost 50359 <nil> <nil>}
	I0708 13:01:11.189296    3932 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 13:01:11.261407    3932 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-129000
	
	I0708 13:01:11.261421    3932 buildroot.go:166] provisioning hostname "running-upgrade-129000"
	I0708 13:01:11.261466    3932 main.go:141] libmachine: Using SSH client type: native
	I0708 13:01:11.261601    3932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fd2920] 0x102fd5180 <nil>  [] 0s} localhost 50359 <nil> <nil>}
	I0708 13:01:11.261607    3932 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-129000 && echo "running-upgrade-129000" | sudo tee /etc/hostname
	I0708 13:01:11.336993    3932 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-129000
	
	I0708 13:01:11.337034    3932 main.go:141] libmachine: Using SSH client type: native
	I0708 13:01:11.337139    3932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fd2920] 0x102fd5180 <nil>  [] 0s} localhost 50359 <nil> <nil>}
	I0708 13:01:11.337147    3932 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-129000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-129000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-129000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 13:01:11.407418    3932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 13:01:11.407427    3932 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19195-1270/.minikube CaCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19195-1270/.minikube}
	I0708 13:01:11.407438    3932 buildroot.go:174] setting up certificates
	I0708 13:01:11.407444    3932 provision.go:84] configureAuth start
	I0708 13:01:11.407448    3932 provision.go:143] copyHostCerts
	I0708 13:01:11.407517    3932 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem, removing ...
	I0708 13:01:11.407525    3932 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 13:01:11.407658    3932 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem (1123 bytes)
	I0708 13:01:11.407846    3932 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem, removing ...
	I0708 13:01:11.407850    3932 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 13:01:11.407903    3932 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem (1675 bytes)
	I0708 13:01:11.408009    3932 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem, removing ...
	I0708 13:01:11.408012    3932 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 13:01:11.408063    3932 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem (1078 bytes)
	I0708 13:01:11.408165    3932 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-129000 san=[127.0.0.1 localhost minikube running-upgrade-129000]
	I0708 13:01:11.542701    3932 provision.go:177] copyRemoteCerts
	I0708 13:01:11.542747    3932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 13:01:11.542756    3932 sshutil.go:53] new ssh client: &{IP:localhost Port:50359 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/running-upgrade-129000/id_rsa Username:docker}
	I0708 13:01:11.581787    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 13:01:11.588706    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0708 13:01:11.595508    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 13:01:11.601922    3932 provision.go:87] duration metric: took 194.478333ms to configureAuth
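	The "duration metric: took …" entries throughout this log are plain wall-clock timings taken around each provisioning step. A minimal Go sketch of that pattern, for reference (the timed helper and its label are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"time"
)

// timed runs fn and reports how long it took, mirroring the
// "duration metric: took ..." lines in the log.
func timed(label string, fn func() error) error {
	start := time.Now()
	err := fn()
	fmt.Printf("duration metric: took %s to %s\n", time.Since(start), label)
	return err
}

func main() {
	_ = timed("configureAuth", func() error {
		time.Sleep(50 * time.Millisecond) // stand-in for the real work
		return nil
	})
}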
	I0708 13:01:11.601933    3932 buildroot.go:189] setting minikube options for container-runtime
	I0708 13:01:11.602042    3932 config.go:182] Loaded profile config "running-upgrade-129000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:01:11.602075    3932 main.go:141] libmachine: Using SSH client type: native
	I0708 13:01:11.602170    3932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fd2920] 0x102fd5180 <nil>  [] 0s} localhost 50359 <nil> <nil>}
	I0708 13:01:11.602175    3932 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0708 13:01:11.673362    3932 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0708 13:01:11.673373    3932 buildroot.go:70] root file system type: tmpfs
	I0708 13:01:11.673422    3932 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0708 13:01:11.673467    3932 main.go:141] libmachine: Using SSH client type: native
	I0708 13:01:11.673593    3932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fd2920] 0x102fd5180 <nil>  [] 0s} localhost 50359 <nil> <nil>}
	I0708 13:01:11.673625    3932 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0708 13:01:11.748138    3932 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0708 13:01:11.748192    3932 main.go:141] libmachine: Using SSH client type: native
	I0708 13:01:11.748305    3932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fd2920] 0x102fd5180 <nil>  [] 0s} localhost 50359 <nil> <nil>}
	I0708 13:01:11.748316    3932 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0708 13:01:11.819159    3932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 13:01:11.819172    3932 machine.go:97] duration metric: took 630.070333ms to provisionDockerMachine
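	The docker.service update above writes the candidate unit to docker.service.new, diffs it against the installed file, and only swaps it in (followed by daemon-reload, enable and restart) when the two differ, so an unchanged unit costs no Docker restart. A local-filesystem Go sketch of that write-compare-swap idea, assuming plain files rather than the SSH session used in the run (replaceIfChanged and the paths are illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged compares the candidate unit with what is installed and only
// swaps it in when they differ; the caller would then restart the service.
func replaceIfChanged(path string, candidate []byte) (changed bool, err error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, candidate) {
		return false, nil // identical: no mv, no systemctl restart needed
	}
	if err := os.WriteFile(path+".new", candidate, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(path+".new", path)
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	changed, err := replaceIfChanged("/tmp/docker.service", unit)
	if err != nil {
		fmt.Println("update failed:", err)
		return
	}
	fmt.Println("unit changed:", changed)
}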
	I0708 13:01:11.819183    3932 start.go:293] postStartSetup for "running-upgrade-129000" (driver="qemu2")
	I0708 13:01:11.819189    3932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 13:01:11.819251    3932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 13:01:11.819259    3932 sshutil.go:53] new ssh client: &{IP:localhost Port:50359 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/running-upgrade-129000/id_rsa Username:docker}
	I0708 13:01:11.858430    3932 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 13:01:11.859876    3932 info.go:137] Remote host: Buildroot 2021.02.12
	I0708 13:01:11.859885    3932 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/addons for local assets ...
	I0708 13:01:11.860162    3932 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/files for local assets ...
	I0708 13:01:11.860314    3932 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> 17672.pem in /etc/ssl/certs
	I0708 13:01:11.860451    3932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 13:01:11.862825    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /etc/ssl/certs/17672.pem (1708 bytes)
	I0708 13:01:11.869634    3932 start.go:296] duration metric: took 50.44825ms for postStartSetup
	I0708 13:01:11.869647    3932 fix.go:56] duration metric: took 693.711542ms for fixHost
	I0708 13:01:11.869672    3932 main.go:141] libmachine: Using SSH client type: native
	I0708 13:01:11.869765    3932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fd2920] 0x102fd5180 <nil>  [] 0s} localhost 50359 <nil> <nil>}
	I0708 13:01:11.869770    3932 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 13:01:11.939891    3932 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720468871.637646055
	
	I0708 13:01:11.939898    3932 fix.go:216] guest clock: 1720468871.637646055
	I0708 13:01:11.939902    3932 fix.go:229] Guest: 2024-07-08 13:01:11.637646055 -0700 PDT Remote: 2024-07-08 13:01:11.869649 -0700 PDT m=+0.797106292 (delta=-232.002945ms)
	I0708 13:01:11.939914    3932 fix.go:200] guest clock delta is within tolerance: -232.002945ms
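	The guest-clock check above runs date +%s.%N inside the VM and compares the result against the host clock; here the -232ms delta is accepted as within tolerance. A self-contained Go sketch of that comparison, assuming the same seconds.nanoseconds output format (parseGuestClock is an illustrative helper, not minikube's own):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
// It assumes the nanosecond field is the usual 9 digits.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1720468871.637646055") // value from the log
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	fmt.Printf("guest clock delta: %v\n", delta) // small deltas are tolerated
}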
	I0708 13:01:11.939917    3932 start.go:83] releasing machines lock for "running-upgrade-129000", held for 763.991209ms
	I0708 13:01:11.939965    3932 ssh_runner.go:195] Run: cat /version.json
	I0708 13:01:11.939975    3932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 13:01:11.939972    3932 sshutil.go:53] new ssh client: &{IP:localhost Port:50359 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/running-upgrade-129000/id_rsa Username:docker}
	I0708 13:01:11.939991    3932 sshutil.go:53] new ssh client: &{IP:localhost Port:50359 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/running-upgrade-129000/id_rsa Username:docker}
	W0708 13:01:11.940544    3932 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50359: connect: connection refused
	I0708 13:01:11.940566    3932 retry.go:31] will retry after 218.441626ms: dial tcp [::1]:50359: connect: connection refused
	W0708 13:01:11.977965    3932 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0708 13:01:11.978029    3932 ssh_runner.go:195] Run: systemctl --version
	I0708 13:01:11.979895    3932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 13:01:11.981434    3932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 13:01:11.981460    3932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0708 13:01:11.984175    3932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0708 13:01:11.988198    3932 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 13:01:11.988204    3932 start.go:494] detecting cgroup driver to use...
	I0708 13:01:11.988268    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 13:01:11.993141    3932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0708 13:01:11.996155    3932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0708 13:01:11.998947    3932 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0708 13:01:11.998968    3932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0708 13:01:12.002084    3932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 13:01:12.005173    3932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0708 13:01:12.007984    3932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 13:01:12.015110    3932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 13:01:12.018213    3932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0708 13:01:12.020928    3932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0708 13:01:12.023786    3932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0708 13:01:12.026412    3932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 13:01:12.029350    3932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 13:01:12.032040    3932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:01:12.124046    3932 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0708 13:01:12.131949    3932 start.go:494] detecting cgroup driver to use...
	I0708 13:01:12.132009    3932 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0708 13:01:12.142171    3932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 13:01:12.146493    3932 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 13:01:12.152692    3932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 13:01:12.157210    3932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 13:01:12.162181    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 13:01:12.170499    3932 ssh_runner.go:195] Run: which cri-dockerd
	I0708 13:01:12.171781    3932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0708 13:01:12.174320    3932 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0708 13:01:12.179556    3932 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0708 13:01:12.271299    3932 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0708 13:01:12.362595    3932 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0708 13:01:12.362654    3932 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0708 13:01:12.368115    3932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:01:12.445114    3932 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 13:01:25.225828    3932 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.781063167s)
	I0708 13:01:25.225897    3932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0708 13:01:25.230803    3932 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0708 13:01:25.238377    3932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 13:01:25.243671    3932 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0708 13:01:25.337034    3932 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0708 13:01:25.414599    3932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:01:25.478097    3932 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0708 13:01:25.483875    3932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 13:01:25.488527    3932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:01:25.561483    3932 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0708 13:01:25.604555    3932 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0708 13:01:25.604629    3932 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0708 13:01:25.606579    3932 start.go:562] Will wait 60s for crictl version
	I0708 13:01:25.606618    3932 ssh_runner.go:195] Run: which crictl
	I0708 13:01:25.608001    3932 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 13:01:25.619916    3932 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0708 13:01:25.619984    3932 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 13:01:25.633048    3932 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
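	The runtime detection above probes the daemon with docker version --format {{.Server.Version}}, which is where the "Docker 20.10.16" figure on the next line comes from. A small Go sketch of the same probe via os/exec (illustrative only; it needs a reachable Docker daemon):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerServerVersion shells out the same probe shown in the log.
func dockerServerVersion() (string, error) {
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	v, err := dockerServerVersion()
	if err != nil {
		fmt.Println("docker not reachable:", err)
		return
	}
	fmt.Println("Docker server version:", v) // e.g. 20.10.16 in this run
}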
	I0708 13:01:25.650213    3932 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0708 13:01:25.650340    3932 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0708 13:01:25.651746    3932 kubeadm.go:877] updating cluster {Name:running-upgrade-129000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50391 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-129000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0708 13:01:25.651790    3932 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0708 13:01:25.651833    3932 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 13:01:25.662075    3932 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0708 13:01:25.662084    3932 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0708 13:01:25.662128    3932 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0708 13:01:25.665192    3932 ssh_runner.go:195] Run: which lz4
	I0708 13:01:25.666461    3932 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 13:01:25.667828    3932 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 13:01:25.667839    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0708 13:01:26.545363    3932 docker.go:649] duration metric: took 878.953417ms to copy over tarball
	I0708 13:01:26.545418    3932 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 13:01:27.741025    3932 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.195621375s)
	I0708 13:01:27.741038    3932 ssh_runner.go:146] rm: /preloaded.tar.lz4
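	The preload handling above is a check-then-transfer sequence: stat /preloaded.tar.lz4, scp the ~360 MB tarball only because that stat failed, extract it with tar -I lz4 into /var, and finally delete it. A local-filesystem Go sketch of the idempotent copy step (copyIfMissing and the paths are illustrative; the real transfer runs over SSH):

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing stats the destination first and only copies when it is absent,
// mirroring the existence check before the scp in the log.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Println("found", dst, "in place, skipping copy")
		return nil
	} else if !os.IsNotExist(err) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := copyIfMissing("preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
		fmt.Println("copy failed:", err)
	}
}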
	I0708 13:01:27.758751    3932 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0708 13:01:27.761951    3932 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0708 13:01:27.767142    3932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:01:27.860818    3932 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 13:01:29.047733    3932 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.186934917s)
	I0708 13:01:29.047821    3932 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 13:01:29.064617    3932 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0708 13:01:29.064626    3932 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0708 13:01:29.064631    3932 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0708 13:01:29.068527    3932 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:01:29.070305    3932 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0708 13:01:29.072562    3932 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:01:29.072738    3932 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0708 13:01:29.074142    3932 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0708 13:01:29.074415    3932 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:01:29.075618    3932 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:01:29.077060    3932 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:01:29.077092    3932 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0708 13:01:29.077162    3932 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0708 13:01:29.078308    3932 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0708 13:01:29.078312    3932 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:01:29.079439    3932 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0708 13:01:29.079510    3932 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:01:29.080493    3932 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0708 13:01:29.081358    3932 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:01:29.485073    3932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0708 13:01:29.486690    3932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:01:29.498999    3932 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0708 13:01:29.499024    3932 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0708 13:01:29.499114    3932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0708 13:01:29.501378    3932 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0708 13:01:29.501398    3932 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:01:29.501433    3932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:01:29.511315    3932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0708 13:01:29.514267    3932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0708 13:01:29.522088    3932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:01:29.522182    3932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0708 13:01:29.532677    3932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0708 13:01:29.533508    3932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0708 13:01:29.536096    3932 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0708 13:01:29.536115    3932 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:01:29.536154    3932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:01:29.549624    3932 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0708 13:01:29.549646    3932 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0708 13:01:29.549713    3932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0708 13:01:29.562468    3932 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0708 13:01:29.562476    3932 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0708 13:01:29.562488    3932 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0708 13:01:29.562488    3932 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0708 13:01:29.562519    3932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0708 13:01:29.562544    3932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0708 13:01:29.562544    3932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0708 13:01:29.564990    3932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0708 13:01:29.565109    3932 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	W0708 13:01:29.577724    3932 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0708 13:01:29.577851    3932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:01:29.586289    3932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0708 13:01:29.586347    3932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0708 13:01:29.586363    3932 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0708 13:01:29.586422    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0708 13:01:29.586488    3932 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0708 13:01:29.597939    3932 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0708 13:01:29.597962    3932 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:01:29.598009    3932 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0708 13:01:29.598016    3932 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:01:29.598025    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0708 13:01:29.607592    3932 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0708 13:01:29.607606    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0708 13:01:29.630584    3932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0708 13:01:29.630704    3932 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	W0708 13:01:29.634407    3932 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0708 13:01:29.634513    3932 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:01:29.673696    3932 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0708 13:01:29.673708    3932 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0708 13:01:29.673738    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0708 13:01:29.673757    3932 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0708 13:01:29.673777    3932 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:01:29.673823    3932 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:01:29.706002    3932 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0708 13:01:29.706135    3932 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0708 13:01:29.726913    3932 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0708 13:01:29.726943    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0708 13:01:29.777672    3932 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0708 13:01:29.777687    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0708 13:01:29.898772    3932 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0708 13:01:29.898795    3932 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0708 13:01:29.898801    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0708 13:01:30.311214    3932 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0708 13:01:30.311237    3932 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0708 13:01:30.311243    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0708 13:01:30.442316    3932 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0708 13:01:30.442356    3932 cache_images.go:92] duration metric: took 1.377758625s to LoadCachedImages
	W0708 13:01:30.442400    3932 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
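	Each required image above is verified with docker image inspect --format {{.Id}}; when the reported ID does not match the expected hash (or the image is missing entirely, as the "wasn't preloaded" line shows), it is removed with docker rmi and reloaded from the local cache directory via docker load. A Go sketch of that check, reusing the pause:3.7 hash from the log (needsTransfer is an illustrative helper, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer asks the runtime for the image ID and treats a mismatch or a
// missing image as "needs transfer", as in the cache_images checks above.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present at all
	}
	return !strings.Contains(strings.TrimSpace(string(out)), wantID)
}

func main() {
	img := "registry.k8s.io/pause:3.7"
	want := "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" // hash from the log
	if needsTransfer(img, want) {
		fmt.Println(img, "needs transfer; would docker rmi it and reload from the cache dir")
	} else {
		fmt.Println(img, "already present at the expected hash")
	}
}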
	I0708 13:01:30.442409    3932 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0708 13:01:30.442455    3932 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-129000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-129000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 13:01:30.442524    3932 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0708 13:01:30.464493    3932 cni.go:84] Creating CNI manager for ""
	I0708 13:01:30.464504    3932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:01:30.464511    3932 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 13:01:30.464520    3932 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-129000 NodeName:running-upgrade-129000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 13:01:30.464595    3932 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-129000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 13:01:30.464655    3932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0708 13:01:30.467920    3932 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 13:01:30.467952    3932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 13:01:30.470552    3932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0708 13:01:30.475469    3932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 13:01:30.480445    3932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
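	The kubeadm config generated above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) shipped to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch, assuming gopkg.in/yaml.v3, that splits such a manifest on its document separators and reads each kind (illustrative only, not part of minikube):

package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3"
)

// kindsOf splits a multi-document manifest on "---" and reports each kind.
func kindsOf(manifest string) ([]string, error) {
	var kinds []string
	for _, doc := range strings.Split(manifest, "\n---\n") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		var meta struct {
			Kind string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			return nil, err
		}
		kinds = append(kinds, meta.Kind)
	}
	return kinds, nil
}

func main() {
	manifest := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
	kinds, err := kindsOf(manifest)
	if err != nil {
		panic(err)
	}
	fmt.Println(kinds) // [InitConfiguration ClusterConfiguration]
}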
	I0708 13:01:30.485535    3932 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0708 13:01:30.486971    3932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:01:30.576238    3932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 13:01:30.580722    3932 certs.go:68] Setting up /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000 for IP: 10.0.2.15
	I0708 13:01:30.580729    3932 certs.go:194] generating shared ca certs ...
	I0708 13:01:30.580737    3932 certs.go:226] acquiring lock for ca certs: {Name:mka13b605a6983b2618b91f3a0bdec43c132a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:01:30.580884    3932 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key
	I0708 13:01:30.580922    3932 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key
	I0708 13:01:30.580926    3932 certs.go:256] generating profile certs ...
	I0708 13:01:30.580998    3932 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/client.key
	I0708 13:01:30.581015    3932 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/apiserver.key.fd765dad
	I0708 13:01:30.581023    3932 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/apiserver.crt.fd765dad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0708 13:01:30.796100    3932 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/apiserver.crt.fd765dad ...
	I0708 13:01:30.796118    3932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/apiserver.crt.fd765dad: {Name:mk8649a7b9b3c4201478b70172059e5f9f902f82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:01:30.796755    3932 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/apiserver.key.fd765dad ...
	I0708 13:01:30.796763    3932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/apiserver.key.fd765dad: {Name:mkdd1be308a3506695845c0f332bf5fe08acb81e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:01:30.796961    3932 certs.go:381] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/apiserver.crt.fd765dad -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/apiserver.crt
	I0708 13:01:30.797135    3932 certs.go:385] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/apiserver.key.fd765dad -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/apiserver.key
	I0708 13:01:30.797318    3932 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/proxy-client.key
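	The apiserver certificate above is issued with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15] and the profile's 26280h expiration. A self-contained Go sketch of issuing a certificate with such IP SANs via crypto/x509 (self-signed and ECDSA here for brevity; the run's certificate is signed by the minikube CA rather than self-signed):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SAN list shown in the log
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("10.0.2.15"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated %d-byte DER certificate with %d IP SANs\n", len(der), len(tmpl.IPAddresses))
}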
	I0708 13:01:30.797476    3932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem (1338 bytes)
	W0708 13:01:30.797500    3932 certs.go:480] ignoring /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767_empty.pem, impossibly tiny 0 bytes
	I0708 13:01:30.797507    3932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 13:01:30.797534    3932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem (1078 bytes)
	I0708 13:01:30.797557    3932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem (1123 bytes)
	I0708 13:01:30.797585    3932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem (1675 bytes)
	I0708 13:01:30.797627    3932 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem (1708 bytes)
	I0708 13:01:30.797966    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 13:01:30.805673    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 13:01:30.812852    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 13:01:30.820407    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 13:01:30.827653    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0708 13:01:30.834149    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 13:01:30.841168    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 13:01:30.848797    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 13:01:30.856977    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /usr/share/ca-certificates/17672.pem (1708 bytes)
	I0708 13:01:30.865305    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 13:01:30.873534    3932 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem --> /usr/share/ca-certificates/1767.pem (1338 bytes)
	I0708 13:01:30.880970    3932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 13:01:30.889821    3932 ssh_runner.go:195] Run: openssl version
	I0708 13:01:30.891733    3932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1767.pem && ln -fs /usr/share/ca-certificates/1767.pem /etc/ssl/certs/1767.pem"
	I0708 13:01:30.898079    3932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1767.pem
	I0708 13:01:30.901261    3932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:34 /usr/share/ca-certificates/1767.pem
	I0708 13:01:30.901331    3932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1767.pem
	I0708 13:01:30.903664    3932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1767.pem /etc/ssl/certs/51391683.0"
	I0708 13:01:30.906817    3932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17672.pem && ln -fs /usr/share/ca-certificates/17672.pem /etc/ssl/certs/17672.pem"
	I0708 13:01:30.910493    3932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17672.pem
	I0708 13:01:30.912200    3932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:34 /usr/share/ca-certificates/17672.pem
	I0708 13:01:30.912230    3932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17672.pem
	I0708 13:01:30.913943    3932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17672.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 13:01:30.921107    3932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 13:01:30.926368    3932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 13:01:30.928441    3932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 13:01:30.928481    3932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 13:01:30.930402    3932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
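
The openssl/ln sequence above installs each uploaded certificate under its OpenSSL subject-hash name in /etc/ssl/certs (51391683.0, 3ec20f2e.0 and b5213941.0 are the hashes printed by the preceding `openssl x509 -hash -noout` calls), so the guest's hashed-directory trust store can resolve them. A small Go sketch of the same two-step pattern follows, run locally rather than through ssh_runner; paths and error handling are simplified.

    // sketch_trust_cert.go - illustrative reimplementation of the hash+symlink
    // pattern above; not minikube's code.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func trustCert(pemPath string) error {
        // Same as: openssl x509 -hash -noout -in <pemPath>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        // Same as: ln -fs <pemPath> /etc/ssl/certs/<hash>.0
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
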
	I0708 13:01:30.935166    3932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 13:01:30.938237    3932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 13:01:30.940293    3932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 13:01:30.942167    3932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 13:01:30.943896    3932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 13:01:30.945681    3932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 13:01:30.947461    3932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
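
Each `openssl x509 -noout -in <crt> -checkend 86400` call above exits non-zero if the certificate expires within 86400 seconds (24 hours); that is the freshness test applied to the kubeadm-managed certificates before the restart is attempted. An equivalent check written directly against crypto/x509 might look like the sketch below (illustrative, not the code path minikube uses).

    // sketch_checkend.go - illustrative; mirrors openssl x509 -checkend 86400.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }
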
	I0708 13:01:30.949237    3932 kubeadm.go:391] StartCluster: {Name:running-upgrade-129000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50391 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-129000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0708 13:01:30.949301    3932 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0708 13:01:30.982883    3932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 13:01:30.990476    3932 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 13:01:30.990482    3932 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 13:01:30.990485    3932 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 13:01:30.990512    3932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 13:01:30.993845    3932 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 13:01:30.994103    3932 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-129000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:01:30.994151    3932 kubeconfig.go:62] /Users/jenkins/minikube-integration/19195-1270/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-129000" cluster setting kubeconfig missing "running-upgrade-129000" context setting]
	I0708 13:01:30.994279    3932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:01:30.994725    3932 kapi.go:59] client config for running-upgrade-129000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043634f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 13:01:30.995056    3932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 13:01:30.997930    3932 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-129000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
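
The restart path treats that diff as config drift: `diff -u` exits 0 when the files match and 1 when they differ, and the non-zero exit is what triggers copying kubeadm.yaml.new over kubeadm.yaml (the `sudo cp` at 13:01:31.501942 below) and rerunning the kubeadm init phases. A hedged local sketch of that exit-code check, not minikube's kubeadm.go:

    // sketch_config_drift.go - illustrative; diff exit status 1 means "files differ".
    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrifted runs diff -u and interprets its exit status.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // exit 0: identical
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // exit 1: drift detected
        }
        return false, "", err // exit >1: diff itself failed
    }

    func main() {
        drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
        if drifted {
            fmt.Print(diff)
        }
    }
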
	I0708 13:01:30.997935    3932 kubeadm.go:1154] stopping kube-system containers ...
	I0708 13:01:30.997985    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0708 13:01:31.010488    3932 docker.go:483] Stopping containers: [9c69d2f72ceb 27a315e0e1d2 0cc2bd949ba8 0e2cb05bc872 9c3007d53d5d b48635d6f41d 68f3d795741a b7d37f9d4b8f 534f472f7497 4295aa892888 6c5bcf734377 b498e6f3c980 549baf944be7 572a7b23b33d ab6316c47d83 f8c93c03a429 0cca2828c517 0c6f3330e29f d958102ef3f1]
	I0708 13:01:31.010568    3932 ssh_runner.go:195] Run: docker stop 9c69d2f72ceb 27a315e0e1d2 0cc2bd949ba8 0e2cb05bc872 9c3007d53d5d b48635d6f41d 68f3d795741a b7d37f9d4b8f 534f472f7497 4295aa892888 6c5bcf734377 b498e6f3c980 549baf944be7 572a7b23b33d ab6316c47d83 f8c93c03a429 0cca2828c517 0c6f3330e29f d958102ef3f1
	I0708 13:01:31.407454    3932 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 13:01:31.475528    3932 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 13:01:31.479075    3932 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Jul  8 20:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul  8 20:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul  8 20:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul  8 20:00 /etc/kubernetes/scheduler.conf
	
	I0708 13:01:31.479116    3932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/admin.conf
	I0708 13:01:31.482542    3932 kubeadm.go:162] "https://control-plane.minikube.internal:50391" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0708 13:01:31.482574    3932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 13:01:31.485478    3932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/kubelet.conf
	I0708 13:01:31.488343    3932 kubeadm.go:162] "https://control-plane.minikube.internal:50391" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0708 13:01:31.488370    3932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 13:01:31.491076    3932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/controller-manager.conf
	I0708 13:01:31.493669    3932 kubeadm.go:162] "https://control-plane.minikube.internal:50391" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0708 13:01:31.493688    3932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 13:01:31.496454    3932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/scheduler.conf
	I0708 13:01:31.499286    3932 kubeadm.go:162] "https://control-plane.minikube.internal:50391" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0708 13:01:31.499305    3932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 13:01:31.501942    3932 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 13:01:31.505376    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:01:31.528694    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:01:31.956005    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:01:32.154930    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:01:32.177927    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:01:32.198470    3932 api_server.go:52] waiting for apiserver process to appear ...
	I0708 13:01:32.198544    3932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:01:32.701179    3932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:01:33.200853    3932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:01:33.700590    3932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:01:33.704911    3932 api_server.go:72] duration metric: took 1.506483s to wait for apiserver process to appear ...
	I0708 13:01:33.704921    3932 api_server.go:88] waiting for apiserver healthz status ...
	I0708 13:01:33.704938    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:01:38.707017    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:01:38.707065    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:01:43.707424    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:01:43.707504    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:01:48.708209    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:01:48.708229    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:01:53.708868    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:01:53.708955    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:01:58.710252    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:01:58.710323    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:02:03.712213    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:02:03.712298    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:02:08.714167    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:02:08.714197    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:02:13.716382    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:02:13.716450    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:02:18.718782    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:02:18.718804    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:02:23.720241    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:02:23.720325    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:02:28.722884    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:02:28.722950    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:02:33.723540    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
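
The block above is the healthz wait loop: each probe of https://10.0.2.15:8443/healthz is given a short per-request timeout (roughly five seconds between each "Checking" and "stopped" pair here) and retried until the apiserver answers or the overall budget runs out; in this run it never answers, so minikube falls back to gathering component logs below. A minimal sketch of such a polling loop follows; the timeout values and TLS handling are assumptions for illustration, not minikube's exact settings.

    // sketch_healthz_poll.go - illustrative polling loop; timeouts are assumptions.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200 OK or the deadline passes.
    func waitForHealthz(url string, deadline time.Time) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-request timeout, as seen in the log
            Transport: &http.Transport{
                // The apiserver cert is not in the local trust store in this sketch.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(2 * time.Second) // back off before the next probe
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        err := waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(4*time.Minute))
        fmt.Println(err)
    }
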
	I0708 13:02:33.724001    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:02:33.759421    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:02:33.759561    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:02:33.780110    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:02:33.780214    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:02:33.794834    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:02:33.794907    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:02:33.807023    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:02:33.807113    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:02:33.817805    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:02:33.817882    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:02:33.828490    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:02:33.828561    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:02:33.838589    3932 logs.go:276] 0 containers: []
	W0708 13:02:33.838601    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:02:33.838652    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:02:33.849399    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:02:33.849418    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:02:33.849424    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:02:33.867595    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:02:33.867605    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:02:33.936069    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:02:33.936079    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:02:33.948076    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:02:33.948089    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:02:33.962360    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:02:33.962373    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:02:33.973900    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:02:33.973911    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:02:33.998926    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:02:33.998935    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:02:34.010325    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:02:34.010337    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:02:34.023587    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:02:34.023600    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:02:34.034552    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:02:34.034563    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:02:34.045986    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:02:34.045999    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:02:34.057699    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:02:34.057709    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:02:34.068829    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:02:34.068840    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:02:34.110821    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:02:34.110829    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:02:34.114948    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:02:34.114956    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:02:34.128717    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:02:34.128728    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:02:34.142590    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:02:34.142600    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
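
Each failed healthz round is followed by the same diagnostics pass: `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` to locate the container(s) for every control-plane component, then `docker logs --tail 400 <id>` on each hit. A compact sketch of that pattern, run locally instead of through ssh_runner, is below; component names are taken from the log, everything else is illustrative.

    // sketch_gather_logs.go - illustrative; mirrors the docker ps / docker logs pattern above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers whose name matches k8s_<component>.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, err)
                continue
            }
            for _, id := range ids {
                // Same as the "docker logs --tail 400 <id>" calls in the log.
                out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s\n", c, id, out)
            }
        }
    }
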
	I0708 13:02:36.660413    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:02:41.663098    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:02:41.663487    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:02:41.698398    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:02:41.698538    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:02:41.720048    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:02:41.720166    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:02:41.735466    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:02:41.735544    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:02:41.747712    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:02:41.747782    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:02:41.758364    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:02:41.758422    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:02:41.768514    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:02:41.768572    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:02:41.778754    3932 logs.go:276] 0 containers: []
	W0708 13:02:41.778765    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:02:41.778818    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:02:41.789117    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:02:41.789135    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:02:41.789140    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:02:41.829195    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:02:41.829203    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:02:41.864319    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:02:41.864333    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:02:41.878164    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:02:41.878176    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:02:41.889718    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:02:41.889730    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:02:41.903460    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:02:41.903473    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:02:41.920699    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:02:41.920708    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:02:41.946262    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:02:41.946273    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:02:41.957980    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:02:41.957990    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:02:41.962383    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:02:41.962389    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:02:41.979203    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:02:41.979213    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:02:41.990222    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:02:41.990235    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:02:42.005444    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:02:42.005453    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:02:42.023215    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:02:42.023224    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:02:42.034884    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:02:42.034894    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:02:42.046513    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:02:42.046524    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:02:42.058384    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:02:42.058394    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:02:44.571040    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:02:49.573661    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:02:49.574151    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:02:49.613606    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:02:49.613749    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:02:49.635989    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:02:49.636088    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:02:49.651004    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:02:49.651072    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:02:49.663347    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:02:49.663422    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:02:49.678973    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:02:49.679036    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:02:49.698686    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:02:49.698744    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:02:49.709168    3932 logs.go:276] 0 containers: []
	W0708 13:02:49.709181    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:02:49.709232    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:02:49.720138    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:02:49.720156    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:02:49.720162    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:02:49.724897    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:02:49.724903    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:02:49.739234    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:02:49.739246    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:02:49.750472    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:02:49.750484    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:02:49.767899    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:02:49.767910    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:02:49.779516    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:02:49.779525    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:02:49.793458    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:02:49.793470    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:02:49.805529    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:02:49.805542    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:02:49.817990    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:02:49.818001    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:02:49.833021    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:02:49.833029    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:02:49.851690    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:02:49.851700    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:02:49.865591    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:02:49.865601    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:02:49.877100    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:02:49.877113    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:02:49.918552    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:02:49.918561    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:02:49.953031    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:02:49.953045    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:02:49.964890    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:02:49.964900    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:02:49.976775    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:02:49.976789    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:02:52.505511    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:02:57.507759    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:02:57.508005    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:02:57.531036    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:02:57.531131    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:02:57.546545    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:02:57.546640    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:02:57.562404    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:02:57.562472    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:02:57.573395    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:02:57.573470    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:02:57.583900    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:02:57.583969    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:02:57.594151    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:02:57.594215    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:02:57.604386    3932 logs.go:276] 0 containers: []
	W0708 13:02:57.604398    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:02:57.604457    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:02:57.615892    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:02:57.615907    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:02:57.615914    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:02:57.620677    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:02:57.620687    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:02:57.632238    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:02:57.632251    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:02:57.643781    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:02:57.643791    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:02:57.670902    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:02:57.670910    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:02:57.685188    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:02:57.685199    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:02:57.700429    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:02:57.700441    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:02:57.714969    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:02:57.714981    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:02:57.736019    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:02:57.736029    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:02:57.778270    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:02:57.778279    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:02:57.813649    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:02:57.813663    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:02:57.825727    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:02:57.825741    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:02:57.837324    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:02:57.837337    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:02:57.848742    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:02:57.848754    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:02:57.862308    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:02:57.862322    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:02:57.872985    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:02:57.872996    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:02:57.884152    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:02:57.884164    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:03:00.398139    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:03:05.400257    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:03:05.400579    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:03:05.430200    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:03:05.430329    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:03:05.448252    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:03:05.448346    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:03:05.463982    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:03:05.464065    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:03:05.475878    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:03:05.475959    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:03:05.486509    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:03:05.486582    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:03:05.496998    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:03:05.497070    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:03:05.507442    3932 logs.go:276] 0 containers: []
	W0708 13:03:05.507454    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:03:05.507514    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:03:05.521421    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:03:05.521438    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:03:05.521445    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:03:05.535413    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:03:05.535423    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:03:05.550115    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:03:05.550126    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:03:05.561999    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:03:05.562011    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:03:05.579284    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:03:05.579295    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:03:05.612616    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:03:05.612628    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:03:05.624812    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:03:05.624822    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:03:05.639742    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:03:05.639753    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:03:05.651329    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:03:05.651339    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:03:05.662399    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:03:05.662412    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:03:05.688501    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:03:05.688507    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:03:05.692641    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:03:05.692646    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:03:05.732361    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:03:05.732368    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:03:05.746381    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:03:05.746389    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:03:05.763499    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:03:05.763512    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:03:05.777397    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:03:05.777407    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:03:05.788923    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:03:05.788935    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:03:08.302474    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:03:13.304798    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:03:13.305286    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:03:13.348380    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:03:13.348509    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:03:13.368121    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:03:13.368215    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:03:13.382545    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:03:13.382621    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:03:13.394353    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:03:13.394409    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:03:13.405556    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:03:13.405626    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:03:13.418664    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:03:13.418721    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:03:13.439609    3932 logs.go:276] 0 containers: []
	W0708 13:03:13.439623    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:03:13.439676    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:03:13.450547    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:03:13.450565    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:03:13.450570    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:03:13.467629    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:03:13.467638    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:03:13.479373    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:03:13.479383    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:03:13.515030    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:03:13.515039    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:03:13.527810    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:03:13.527820    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:03:13.539493    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:03:13.539504    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:03:13.581303    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:03:13.581311    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:03:13.592613    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:03:13.592624    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:03:13.604377    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:03:13.604387    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:03:13.616012    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:03:13.616022    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:03:13.620889    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:03:13.620898    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:03:13.634966    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:03:13.634976    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:03:13.646339    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:03:13.646352    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:03:13.661002    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:03:13.661012    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:03:13.672838    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:03:13.672853    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:03:13.687173    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:03:13.687183    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:03:13.713422    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:03:13.713429    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:03:16.228876    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:03:21.231424    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:03:21.231595    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:03:21.254240    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:03:21.254340    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:03:21.270135    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:03:21.270208    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:03:21.283902    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:03:21.283967    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:03:21.294659    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:03:21.294723    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:03:21.310868    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:03:21.310935    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:03:21.321502    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:03:21.321565    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:03:21.332162    3932 logs.go:276] 0 containers: []
	W0708 13:03:21.332172    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:03:21.332226    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:03:21.342206    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:03:21.342223    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:03:21.342229    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:03:21.353444    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:03:21.353455    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:03:21.364979    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:03:21.364990    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:03:21.375953    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:03:21.375967    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:03:21.387440    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:03:21.387451    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:03:21.402389    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:03:21.402401    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:03:21.420312    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:03:21.420325    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:03:21.425081    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:03:21.425092    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:03:21.439917    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:03:21.439927    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:03:21.451685    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:03:21.451698    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:03:21.485930    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:03:21.485942    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:03:21.503614    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:03:21.503626    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:03:21.515012    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:03:21.515026    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:03:21.532879    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:03:21.532888    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:03:21.557205    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:03:21.557213    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:03:21.596903    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:03:21.596911    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:03:21.613997    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:03:21.614009    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:03:24.127583    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:03:29.130237    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:03:29.130650    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:03:29.170622    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:03:29.170766    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:03:29.192911    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:03:29.193017    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:03:29.208272    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:03:29.208348    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:03:29.221007    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:03:29.221075    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:03:29.234027    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:03:29.234102    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:03:29.246888    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:03:29.246954    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:03:29.257084    3932 logs.go:276] 0 containers: []
	W0708 13:03:29.257096    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:03:29.257152    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:03:29.267339    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:03:29.267356    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:03:29.267361    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:03:29.282555    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:03:29.282565    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:03:29.307357    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:03:29.307368    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:03:29.319714    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:03:29.319727    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:03:29.333621    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:03:29.333633    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:03:29.347595    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:03:29.347607    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:03:29.358657    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:03:29.358669    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:03:29.370239    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:03:29.370253    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:03:29.405063    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:03:29.405076    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:03:29.419820    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:03:29.419830    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:03:29.436935    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:03:29.436945    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:03:29.448418    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:03:29.448432    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:03:29.453216    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:03:29.453223    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:03:29.472825    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:03:29.472838    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:03:29.484720    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:03:29.484733    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:03:29.498643    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:03:29.498656    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:03:29.540766    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:03:29.540776    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:03:32.054275    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:03:37.055934    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:03:37.056016    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:03:37.073385    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:03:37.073440    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:03:37.084972    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:03:37.085032    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:03:37.096443    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:03:37.096496    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:03:37.109060    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:03:37.109115    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:03:37.120283    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:03:37.120356    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:03:37.131844    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:03:37.131904    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:03:37.142242    3932 logs.go:276] 0 containers: []
	W0708 13:03:37.142257    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:03:37.142305    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:03:37.157372    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:03:37.157393    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:03:37.157398    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:03:37.168855    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:03:37.168866    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:03:37.180811    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:03:37.180824    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:03:37.206790    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:03:37.206799    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:03:37.220229    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:03:37.220240    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:03:37.238155    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:03:37.238167    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:03:37.274552    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:03:37.274562    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:03:37.286684    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:03:37.286697    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:03:37.297905    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:03:37.297919    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:03:37.315308    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:03:37.315317    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:03:37.357474    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:03:37.357482    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:03:37.361791    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:03:37.361798    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:03:37.376429    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:03:37.376439    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:03:37.393387    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:03:37.393396    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:03:37.406582    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:03:37.406592    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:03:37.418998    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:03:37.419008    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:03:37.430810    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:03:37.430820    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:03:39.944372    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:03:44.947081    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:03:44.947255    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:03:44.959418    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:03:44.959498    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:03:44.970721    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:03:44.970792    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:03:44.985713    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:03:44.985778    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:03:44.996680    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:03:44.996740    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:03:45.007328    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:03:45.007394    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:03:45.018347    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:03:45.018407    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:03:45.028983    3932 logs.go:276] 0 containers: []
	W0708 13:03:45.028994    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:03:45.029049    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:03:45.039497    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:03:45.039516    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:03:45.039522    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:03:45.082841    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:03:45.082849    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:03:45.094520    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:03:45.094534    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:03:45.107188    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:03:45.107200    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:03:45.119233    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:03:45.119244    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:03:45.130830    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:03:45.130843    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:03:45.142235    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:03:45.142246    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:03:45.154273    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:03:45.154284    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:03:45.171783    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:03:45.171792    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:03:45.176870    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:03:45.176878    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:03:45.212610    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:03:45.212622    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:03:45.224634    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:03:45.224645    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:03:45.239599    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:03:45.239608    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:03:45.251116    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:03:45.251126    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:03:45.277424    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:03:45.277431    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:03:45.291708    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:03:45.291719    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:03:45.305648    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:03:45.305657    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:03:47.822717    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:03:52.824376    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:03:52.824565    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:03:52.841736    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:03:52.841823    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:03:52.854328    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:03:52.854401    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:03:52.865983    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:03:52.866062    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:03:52.877470    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:03:52.877540    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:03:52.898430    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:03:52.898495    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:03:52.909275    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:03:52.909341    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:03:52.920031    3932 logs.go:276] 0 containers: []
	W0708 13:03:52.920044    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:03:52.920103    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:03:52.931072    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:03:52.931090    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:03:52.931095    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:03:52.943101    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:03:52.943111    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:03:52.947752    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:03:52.947758    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:03:52.960247    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:03:52.960260    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:03:52.972036    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:03:52.972048    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:03:52.987418    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:03:52.987430    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:03:53.002112    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:03:53.002125    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:03:53.014184    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:03:53.014194    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:03:53.027306    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:03:53.027317    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:03:53.052864    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:03:53.052872    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:03:53.096182    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:03:53.096192    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:03:53.115472    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:03:53.115482    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:03:53.133682    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:03:53.133695    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:03:53.147680    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:03:53.147691    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:03:53.183864    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:03:53.183876    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:03:53.203791    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:03:53.203802    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:03:53.215393    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:03:53.215406    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:03:55.729468    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:04:00.730255    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:04:00.730370    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:04:00.742546    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:04:00.742635    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:04:00.754394    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:04:00.754483    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:04:00.765829    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:04:00.765903    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:04:00.778570    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:04:00.778647    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:04:00.789875    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:04:00.789950    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:04:00.805324    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:04:00.805390    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:04:00.816119    3932 logs.go:276] 0 containers: []
	W0708 13:04:00.816131    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:04:00.816194    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:04:00.828347    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:04:00.828367    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:04:00.828372    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:04:00.855291    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:04:00.855311    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:04:00.869862    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:04:00.869875    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:04:00.883110    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:04:00.883122    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:04:00.895932    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:04:00.895943    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:04:00.913122    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:04:00.913134    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:04:00.930187    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:04:00.930201    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:04:00.943450    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:04:00.943466    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:04:00.959970    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:04:00.959983    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:04:01.002593    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:04:01.002614    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:04:01.017719    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:04:01.017733    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:04:01.035039    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:04:01.035050    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:04:01.053955    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:04:01.053973    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:04:01.072418    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:04:01.072431    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:04:01.092771    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:04:01.092784    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:04:01.097609    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:04:01.097621    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:04:01.138850    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:04:01.138863    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:04:03.659529    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:04:08.661731    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:04:08.662176    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:04:08.701681    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:04:08.701825    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:04:08.729780    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:04:08.729876    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:04:08.748298    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:04:08.748372    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:04:08.760298    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:04:08.760371    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:04:08.771117    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:04:08.771186    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:04:08.781838    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:04:08.781897    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:04:08.792302    3932 logs.go:276] 0 containers: []
	W0708 13:04:08.792312    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:04:08.792368    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:04:08.803494    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:04:08.803511    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:04:08.803517    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:04:08.843402    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:04:08.843412    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:04:08.855314    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:04:08.855327    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:04:08.860245    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:04:08.860253    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:04:08.877640    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:04:08.877653    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:04:08.888722    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:04:08.888734    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:04:08.904281    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:04:08.904294    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:04:08.918601    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:04:08.918614    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:04:08.930282    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:04:08.930293    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:04:08.972339    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:04:08.972350    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:04:08.986428    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:04:08.986440    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:04:08.998324    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:04:08.998335    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:04:09.010003    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:04:09.010012    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:04:09.031564    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:04:09.031574    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:04:09.042690    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:04:09.042698    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:04:09.066218    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:04:09.066229    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:04:09.077744    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:04:09.077754    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:04:11.591694    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:04:16.593836    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:04:16.593994    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:04:16.605238    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:04:16.605304    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:04:16.616166    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:04:16.616233    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:04:16.626648    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:04:16.626715    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:04:16.638264    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:04:16.638355    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:04:16.648997    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:04:16.649062    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:04:16.660109    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:04:16.660180    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:04:16.670640    3932 logs.go:276] 0 containers: []
	W0708 13:04:16.670653    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:04:16.670711    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:04:16.682399    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:04:16.682415    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:04:16.682421    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:04:16.696824    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:04:16.696836    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:04:16.717869    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:04:16.717879    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:04:16.730850    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:04:16.730864    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:04:16.744495    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:04:16.744509    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:04:16.766515    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:04:16.766531    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:04:16.794924    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:04:16.794941    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:04:16.840344    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:04:16.840366    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:04:16.845599    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:04:16.845612    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:04:16.869122    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:04:16.869136    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:04:16.884287    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:04:16.884307    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:04:16.897269    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:04:16.897282    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:04:16.938339    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:04:16.938355    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:04:16.958306    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:04:16.958317    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:04:16.971481    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:04:16.971492    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:04:16.987203    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:04:16.987214    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:04:17.009590    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:04:17.009602    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:04:19.528069    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:04:24.530851    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:04:24.531254    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:04:24.572043    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:04:24.572184    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:04:24.601249    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:04:24.601349    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:04:24.615460    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:04:24.615537    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:04:24.626780    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:04:24.626857    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:04:24.637223    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:04:24.637285    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:04:24.649215    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:04:24.649279    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:04:24.659010    3932 logs.go:276] 0 containers: []
	W0708 13:04:24.659024    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:04:24.659075    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:04:24.669373    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:04:24.669391    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:04:24.669396    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:04:24.680557    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:04:24.680566    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:04:24.692937    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:04:24.692949    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:04:24.711887    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:04:24.711898    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:04:24.723759    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:04:24.723770    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:04:24.765710    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:04:24.765720    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:04:24.800381    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:04:24.800391    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:04:24.812833    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:04:24.812842    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:04:24.827353    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:04:24.827364    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:04:24.839378    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:04:24.839392    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:04:24.853854    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:04:24.853864    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:04:24.867931    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:04:24.867944    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:04:24.879242    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:04:24.879253    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:04:24.891093    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:04:24.891106    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:04:24.895379    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:04:24.895384    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:04:24.908892    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:04:24.908902    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:04:24.923575    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:04:24.923587    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:04:27.449337    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:04:32.452195    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:04:32.452733    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:04:32.490471    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:04:32.490600    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:04:32.509538    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:04:32.509635    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:04:32.522282    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:04:32.522356    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:04:32.533232    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:04:32.533316    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:04:32.544000    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:04:32.544066    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:04:32.554737    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:04:32.554809    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:04:32.565177    3932 logs.go:276] 0 containers: []
	W0708 13:04:32.565188    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:04:32.565246    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:04:32.576064    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:04:32.576083    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:04:32.576088    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:04:32.580748    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:04:32.580755    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:04:32.594444    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:04:32.594457    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:04:32.614099    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:04:32.614114    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:04:32.629516    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:04:32.629526    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:04:32.645455    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:04:32.645465    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:04:32.656231    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:04:32.656241    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:04:32.698024    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:04:32.698039    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:04:32.709532    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:04:32.709543    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:04:32.723928    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:04:32.723939    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:04:32.748118    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:04:32.748125    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:04:32.759956    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:04:32.759968    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:04:32.771839    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:04:32.771851    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:04:32.811550    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:04:32.811558    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:04:32.828251    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:04:32.828260    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:04:32.840189    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:04:32.840202    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:04:32.851159    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:04:32.851171    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:04:35.373897    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:04:40.376119    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:04:40.376269    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:04:40.388301    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:04:40.388378    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:04:40.400329    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:04:40.400399    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:04:40.411891    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:04:40.411963    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:04:40.423317    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:04:40.423391    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:04:40.434046    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:04:40.434116    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:04:40.445002    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:04:40.445070    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:04:40.456767    3932 logs.go:276] 0 containers: []
	W0708 13:04:40.456779    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:04:40.456837    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:04:40.469439    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:04:40.469454    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:04:40.469466    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:04:40.485111    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:04:40.485122    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:04:40.501235    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:04:40.501245    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:04:40.535872    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:04:40.535884    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:04:40.549814    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:04:40.549826    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:04:40.565425    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:04:40.565439    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:04:40.576917    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:04:40.576930    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:04:40.590032    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:04:40.590045    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:04:40.610284    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:04:40.610304    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:04:40.637573    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:04:40.637595    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:04:40.683137    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:04:40.683155    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:04:40.702446    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:04:40.702459    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:04:40.723422    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:04:40.723446    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:04:40.736487    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:04:40.736498    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:04:40.750243    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:04:40.750254    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:04:40.755954    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:04:40.755965    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:04:40.774893    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:04:40.774906    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:04:43.292541    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:04:48.294843    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:04:48.295122    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:04:48.330489    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:04:48.330591    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:04:48.347490    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:04:48.347582    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:04:48.362136    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:04:48.362220    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:04:48.373510    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:04:48.373588    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:04:48.384041    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:04:48.384108    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:04:48.394746    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:04:48.394806    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:04:48.405111    3932 logs.go:276] 0 containers: []
	W0708 13:04:48.405122    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:04:48.405179    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:04:48.416123    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:04:48.416142    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:04:48.416153    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:04:48.427773    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:04:48.427789    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:04:48.440072    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:04:48.440082    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:04:48.453821    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:04:48.453831    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:04:48.467954    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:04:48.467967    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:04:48.479941    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:04:48.479955    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:04:48.504136    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:04:48.504142    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:04:48.539684    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:04:48.539696    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:04:48.553997    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:04:48.554011    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:04:48.569211    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:04:48.569221    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:04:48.580864    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:04:48.580877    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:04:48.593202    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:04:48.593216    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:04:48.634828    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:04:48.634839    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:04:48.639295    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:04:48.639301    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:04:48.650707    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:04:48.650717    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:04:48.668451    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:04:48.668466    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:04:48.687341    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:04:48.687351    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:04:51.204041    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:04:56.204971    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:04:56.205099    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:04:56.233457    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:04:56.233538    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:04:56.256556    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:04:56.256633    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:04:56.273121    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:04:56.273198    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:04:56.284308    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:04:56.284385    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:04:56.295643    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:04:56.295724    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:04:56.307385    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:04:56.307455    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:04:56.318089    3932 logs.go:276] 0 containers: []
	W0708 13:04:56.318103    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:04:56.318166    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:04:56.329097    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:04:56.329115    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:04:56.329121    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:04:56.333680    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:04:56.333689    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:04:56.348842    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:04:56.348853    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:04:56.360958    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:04:56.360973    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:04:56.373170    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:04:56.373182    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:04:56.391218    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:04:56.391229    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:04:56.406145    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:04:56.406158    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:04:56.418137    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:04:56.418152    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:04:56.459402    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:04:56.459411    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:04:56.471182    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:04:56.471198    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:04:56.486916    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:04:56.486927    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:04:56.499082    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:04:56.499094    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:04:56.537468    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:04:56.537479    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:04:56.549890    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:04:56.549903    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:04:56.562779    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:04:56.562792    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:04:56.577518    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:04:56.577529    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:04:56.590570    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:04:56.590582    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:04:59.116578    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:04.118826    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:04.119271    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:05:04.155016    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:05:04.155163    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:05:04.175422    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:05:04.175528    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:05:04.191223    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:05:04.191304    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:05:04.207910    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:05:04.207986    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:05:04.218415    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:05:04.218488    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:05:04.230798    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:05:04.230871    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:05:04.249860    3932 logs.go:276] 0 containers: []
	W0708 13:05:04.249873    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:05:04.249930    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:05:04.263048    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:05:04.263066    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:05:04.263070    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:05:04.286192    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:05:04.286201    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:05:04.298315    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:05:04.298327    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:05:04.302723    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:05:04.302732    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:05:04.314326    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:05:04.314337    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:05:04.329816    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:05:04.329827    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:05:04.341926    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:05:04.341938    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:05:04.383674    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:05:04.383687    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:05:04.395509    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:05:04.395520    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:05:04.409962    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:05:04.409972    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:05:04.447026    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:05:04.447036    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:05:04.459616    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:05:04.459626    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:05:04.471515    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:05:04.471528    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:05:04.492813    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:05:04.492826    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:05:04.504947    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:05:04.504958    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:05:04.519338    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:05:04.519347    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:05:04.533703    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:05:04.533713    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:05:07.050643    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:12.052874    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:12.053172    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:05:12.082264    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:05:12.082388    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:05:12.100572    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:05:12.100663    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:05:12.114078    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:05:12.114164    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:05:12.126037    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:05:12.126109    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:05:12.136598    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:05:12.136661    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:05:12.147127    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:05:12.147194    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:05:12.157717    3932 logs.go:276] 0 containers: []
	W0708 13:05:12.157733    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:05:12.157787    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:05:12.168521    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:05:12.168539    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:05:12.168545    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:05:12.186466    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:05:12.186479    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:05:12.193793    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:05:12.193800    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:05:12.212704    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:05:12.212720    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:05:12.234902    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:05:12.234912    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:05:12.254368    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:05:12.254382    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:05:12.268505    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:05:12.268515    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:05:12.280077    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:05:12.280087    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:05:12.295894    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:05:12.295906    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:05:12.307661    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:05:12.307674    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:05:12.322010    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:05:12.322021    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:05:12.333891    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:05:12.333902    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:05:12.346356    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:05:12.346368    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:05:12.358778    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:05:12.358788    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:05:12.370783    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:05:12.370794    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:05:12.385675    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:05:12.385686    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:05:12.431192    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:05:12.431203    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:05:14.968827    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:19.971000    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:19.971288    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:05:20.000750    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:05:20.000881    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:05:20.018680    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:05:20.018773    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:05:20.032646    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:05:20.032713    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:05:20.045758    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:05:20.045838    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:05:20.055977    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:05:20.056044    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:05:20.066655    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:05:20.066725    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:05:20.077509    3932 logs.go:276] 0 containers: []
	W0708 13:05:20.077520    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:05:20.077580    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:05:20.088286    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:05:20.088305    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:05:20.088311    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:05:20.104197    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:05:20.104210    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:05:20.116693    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:05:20.116703    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:05:20.130844    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:05:20.130854    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:05:20.142370    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:05:20.142383    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:05:20.153814    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:05:20.153827    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:05:20.165927    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:05:20.165938    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:05:20.177391    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:05:20.177401    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:05:20.189050    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:05:20.189061    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:05:20.211137    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:05:20.211145    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:05:20.233339    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:05:20.233350    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:05:20.274738    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:05:20.274758    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:05:20.279822    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:05:20.279832    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:05:20.291728    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:05:20.291739    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:05:20.311441    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:05:20.311452    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:05:20.381287    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:05:20.381298    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:05:20.399100    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:05:20.399109    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:05:22.915470    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:27.917666    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:27.917911    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:05:27.940238    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:05:27.940349    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:05:27.955776    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:05:27.955856    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:05:27.968611    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:05:27.968673    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:05:27.979635    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:05:27.979707    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:05:27.990389    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:05:27.990456    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:05:28.000583    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:05:28.000650    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:05:28.010797    3932 logs.go:276] 0 containers: []
	W0708 13:05:28.010806    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:05:28.010862    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:05:28.022121    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:05:28.022139    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:05:28.022144    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:05:28.034841    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:05:28.034852    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:05:28.045769    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:05:28.045779    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:05:28.050024    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:05:28.050030    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:05:28.065475    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:05:28.065485    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:05:28.079037    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:05:28.079048    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:05:28.091373    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:05:28.091385    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:05:28.125723    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:05:28.125734    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:05:28.137783    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:05:28.137793    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:05:28.157383    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:05:28.157392    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:05:28.182312    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:05:28.182322    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:05:28.195824    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:05:28.195834    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:05:28.207224    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:05:28.207236    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:05:28.231726    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:05:28.231734    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:05:28.273981    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:05:28.273993    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:05:28.288075    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:05:28.288086    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:05:28.309170    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:05:28.309179    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:05:30.830625    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:35.832818    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:35.832899    3932 kubeadm.go:591] duration metric: took 4m4.849406875s to restartPrimaryControlPlane
	W0708 13:05:35.832951    3932 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 13:05:35.832970    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0708 13:05:36.814195    3932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 13:05:36.819225    3932 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 13:05:36.822147    3932 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 13:05:36.824737    3932 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 13:05:36.824742    3932 kubeadm.go:156] found existing configuration files:
	
	I0708 13:05:36.824762    3932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/admin.conf
	I0708 13:05:36.827540    3932 kubeadm.go:162] "https://control-plane.minikube.internal:50391" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 13:05:36.827566    3932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 13:05:36.830832    3932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/kubelet.conf
	I0708 13:05:36.833502    3932 kubeadm.go:162] "https://control-plane.minikube.internal:50391" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 13:05:36.833528    3932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 13:05:36.836396    3932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/controller-manager.conf
	I0708 13:05:36.839370    3932 kubeadm.go:162] "https://control-plane.minikube.internal:50391" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 13:05:36.839397    3932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 13:05:36.842570    3932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/scheduler.conf
	I0708 13:05:36.844936    3932 kubeadm.go:162] "https://control-plane.minikube.internal:50391" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 13:05:36.844958    3932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 13:05:36.847664    3932 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 13:05:36.864313    3932 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0708 13:05:36.864397    3932 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 13:05:36.914115    3932 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 13:05:36.914187    3932 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 13:05:36.914235    3932 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 13:05:36.963148    3932 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 13:05:36.973270    3932 out.go:204]   - Generating certificates and keys ...
	I0708 13:05:36.973305    3932 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 13:05:36.973336    3932 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 13:05:36.973380    3932 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 13:05:36.973410    3932 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 13:05:36.973442    3932 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 13:05:36.973467    3932 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 13:05:36.973507    3932 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 13:05:36.973545    3932 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 13:05:36.973595    3932 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 13:05:36.974365    3932 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 13:05:36.974400    3932 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 13:05:36.974455    3932 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 13:05:37.229152    3932 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 13:05:37.400058    3932 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 13:05:37.525109    3932 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 13:05:37.878723    3932 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 13:05:37.906056    3932 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 13:05:37.906670    3932 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 13:05:37.906691    3932 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 13:05:37.977527    3932 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 13:05:37.980012    3932 out.go:204]   - Booting up control plane ...
	I0708 13:05:37.980085    3932 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 13:05:37.980130    3932 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 13:05:37.980164    3932 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 13:05:37.980220    3932 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 13:05:37.980294    3932 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 13:05:42.481604    3932 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502394 seconds
	I0708 13:05:42.481711    3932 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 13:05:42.485947    3932 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 13:05:42.997248    3932 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 13:05:42.997387    3932 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-129000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 13:05:43.504423    3932 kubeadm.go:309] [bootstrap-token] Using token: hifjt8.wy8jakd0xhx8lfx2
	I0708 13:05:43.510767    3932 out.go:204]   - Configuring RBAC rules ...
	I0708 13:05:43.510852    3932 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 13:05:43.510918    3932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 13:05:43.513087    3932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 13:05:43.514492    3932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 13:05:43.515584    3932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 13:05:43.516644    3932 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 13:05:43.520428    3932 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 13:05:43.698619    3932 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 13:05:43.909033    3932 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 13:05:43.909285    3932 kubeadm.go:309] 
	I0708 13:05:43.909316    3932 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 13:05:43.909321    3932 kubeadm.go:309] 
	I0708 13:05:43.909360    3932 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 13:05:43.909366    3932 kubeadm.go:309] 
	I0708 13:05:43.909412    3932 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 13:05:43.909444    3932 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 13:05:43.909518    3932 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 13:05:43.909576    3932 kubeadm.go:309] 
	I0708 13:05:43.909626    3932 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 13:05:43.909636    3932 kubeadm.go:309] 
	I0708 13:05:43.909661    3932 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 13:05:43.909671    3932 kubeadm.go:309] 
	I0708 13:05:43.909701    3932 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 13:05:43.909779    3932 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 13:05:43.909863    3932 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 13:05:43.909871    3932 kubeadm.go:309] 
	I0708 13:05:43.909912    3932 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 13:05:43.909949    3932 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 13:05:43.909955    3932 kubeadm.go:309] 
	I0708 13:05:43.909995    3932 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token hifjt8.wy8jakd0xhx8lfx2 \
	I0708 13:05:43.910055    3932 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e \
	I0708 13:05:43.910067    3932 kubeadm.go:309] 	--control-plane 
	I0708 13:05:43.910071    3932 kubeadm.go:309] 
	I0708 13:05:43.910126    3932 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 13:05:43.910132    3932 kubeadm.go:309] 
	I0708 13:05:43.910173    3932 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token hifjt8.wy8jakd0xhx8lfx2 \
	I0708 13:05:43.910230    3932 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e 
	I0708 13:05:43.910305    3932 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 13:05:43.910311    3932 cni.go:84] Creating CNI manager for ""
	I0708 13:05:43.910319    3932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:05:43.914276    3932 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 13:05:43.919212    3932 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 13:05:43.922208    3932 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 13:05:43.927736    3932 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 13:05:43.927792    3932 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 13:05:43.927821    3932 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-129000 minikube.k8s.io/updated_at=2024_07_08T13_05_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=running-upgrade-129000 minikube.k8s.io/primary=true
	I0708 13:05:43.969611    3932 kubeadm.go:1107] duration metric: took 41.865875ms to wait for elevateKubeSystemPrivileges
	I0708 13:05:43.969676    3932 ops.go:34] apiserver oom_adj: -16
	W0708 13:05:43.969801    3932 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 13:05:43.969809    3932 kubeadm.go:393] duration metric: took 4m13.027804917s to StartCluster
	I0708 13:05:43.969818    3932 settings.go:142] acquiring lock: {Name:mka0c397a57d617e1d77508d22cc3adb2edf5927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:05:43.969906    3932 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:05:43.970301    3932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:05:43.970515    3932 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:05:43.970599    3932 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 13:05:43.970634    3932 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-129000"
	I0708 13:05:43.970648    3932 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-129000"
	W0708 13:05:43.970651    3932 addons.go:243] addon storage-provisioner should already be in state true
	I0708 13:05:43.970656    3932 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-129000"
	I0708 13:05:43.970665    3932 host.go:66] Checking if "running-upgrade-129000" exists ...
	I0708 13:05:43.970673    3932 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-129000"
	I0708 13:05:43.970713    3932 config.go:182] Loaded profile config "running-upgrade-129000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:05:43.971086    3932 retry.go:31] will retry after 1.101944488s: connect: dial unix /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/running-upgrade-129000/monitor: connect: connection refused
	I0708 13:05:43.971803    3932 kapi.go:59] client config for running-upgrade-129000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043634f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 13:05:43.972145    3932 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-129000"
	W0708 13:05:43.972150    3932 addons.go:243] addon default-storageclass should already be in state true
	I0708 13:05:43.972158    3932 host.go:66] Checking if "running-upgrade-129000" exists ...
	I0708 13:05:43.972693    3932 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 13:05:43.972698    3932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 13:05:43.972703    3932 sshutil.go:53] new ssh client: &{IP:localhost Port:50359 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/running-upgrade-129000/id_rsa Username:docker}
	I0708 13:05:43.975116    3932 out.go:177] * Verifying Kubernetes components...
	I0708 13:05:43.981097    3932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:05:44.069172    3932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 13:05:44.074258    3932 api_server.go:52] waiting for apiserver process to appear ...
	I0708 13:05:44.074302    3932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:05:44.078236    3932 api_server.go:72] duration metric: took 107.713209ms to wait for apiserver process to appear ...
	I0708 13:05:44.078244    3932 api_server.go:88] waiting for apiserver healthz status ...
	I0708 13:05:44.078250    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:44.147717    3932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 13:05:45.080537    3932 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:05:45.084565    3932 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 13:05:45.084581    3932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 13:05:45.084600    3932 sshutil.go:53] new ssh client: &{IP:localhost Port:50359 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/running-upgrade-129000/id_rsa Username:docker}
	I0708 13:05:45.142191    3932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 13:05:49.079023    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:49.079074    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:54.079435    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:54.079482    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:59.079710    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:59.079731    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:04.079813    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:04.079858    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:09.080004    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:09.080026    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:14.080268    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:14.080308    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0708 13:06:14.439129    3932 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0708 13:06:14.443583    3932 out.go:177] * Enabled addons: storage-provisioner
	I0708 13:06:14.449384    3932 addons.go:510] duration metric: took 30.479701917s for enable addons: enabled=[storage-provisioner]
	I0708 13:06:19.080648    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:19.080699    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:24.081182    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:24.081202    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:29.081775    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:29.081825    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:34.083006    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:34.083030    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:39.084507    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:39.084534    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:44.086090    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:44.086200    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:44.110553    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:06:44.110630    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:44.122197    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:06:44.122268    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:44.132661    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:06:44.132732    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:44.143490    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:06:44.143561    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:44.153782    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:06:44.153848    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:44.170011    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:06:44.170081    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:44.180495    3932 logs.go:276] 0 containers: []
	W0708 13:06:44.180507    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:44.180567    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:44.190520    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:06:44.190541    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:44.190547    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:44.227879    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:44.227886    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:44.269791    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:06:44.269802    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:06:44.284383    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:06:44.284396    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:06:44.305134    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:06:44.305145    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:06:44.317711    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:06:44.317721    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:06:44.335036    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:44.335046    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:44.360319    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:44.360329    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:44.365120    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:06:44.365129    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:06:44.379102    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:06:44.379111    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:06:44.390594    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:06:44.390605    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:06:44.402316    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:06:44.402329    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:06:44.413820    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:06:44.413834    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:46.925993    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:51.928159    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:51.928313    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:51.952164    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:06:51.952258    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:51.964713    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:06:51.964785    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:51.976309    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:06:51.976386    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:51.986822    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:06:51.986898    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:51.997465    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:06:51.997529    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:52.007628    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:06:52.007698    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:52.017614    3932 logs.go:276] 0 containers: []
	W0708 13:06:52.017628    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:52.017688    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:52.028410    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:06:52.028424    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:06:52.028430    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:06:52.040553    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:52.040564    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:52.065190    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:52.065197    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:52.104072    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:52.104079    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:52.139769    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:06:52.139780    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:06:52.155380    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:06:52.155391    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:06:52.170259    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:06:52.170270    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:06:52.188861    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:06:52.188871    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:06:52.201175    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:06:52.201186    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:52.213553    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:52.213567    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:52.217876    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:06:52.217882    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:06:52.231534    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:06:52.231544    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:06:52.245131    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:06:52.245142    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:06:54.758411    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:59.760592    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:59.760808    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:59.785316    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:06:59.785421    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:59.804756    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:06:59.804842    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:59.817206    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:06:59.817286    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:59.828290    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:06:59.828360    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:59.838435    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:06:59.838504    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:59.849571    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:06:59.849629    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:59.859695    3932 logs.go:276] 0 containers: []
	W0708 13:06:59.859707    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:59.859769    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:59.870296    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:06:59.870313    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:06:59.870319    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:06:59.887955    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:59.887969    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:59.911789    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:06:59.911796    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:59.923284    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:59.923294    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:59.927850    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:06:59.927858    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:06:59.943425    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:06:59.943437    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:06:59.957648    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:06:59.957662    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:06:59.971429    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:06:59.971443    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:06:59.983396    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:06:59.983409    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:06:59.995055    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:06:59.995070    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:00.006631    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:00.006642    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:00.043681    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:00.043690    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:00.078777    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:00.078791    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:02.594721    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:07.595815    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:07.596112    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:07.629970    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:07.630100    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:07.650693    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:07.650792    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:07.664400    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:07.664478    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:07.676350    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:07.676423    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:07.691586    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:07.691658    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:07.702689    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:07.702761    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:07.713371    3932 logs.go:276] 0 containers: []
	W0708 13:07:07.713384    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:07.713440    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:07.723777    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:07.723790    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:07.723795    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:07.735727    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:07.735739    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:07.750319    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:07.750339    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:07.789867    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:07.789877    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:07.794506    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:07.794513    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:07.830535    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:07.830546    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:07.850566    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:07.850576    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:07.862449    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:07.862463    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:07.885737    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:07.885748    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:07.898633    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:07.898644    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:07.912683    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:07.912693    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:07.924523    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:07.924533    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:07.939207    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:07.939221    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:10.458654    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:15.459183    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:15.459344    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:15.477495    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:15.477582    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:15.489167    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:15.489232    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:15.499416    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:15.499480    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:15.509734    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:15.509805    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:15.523893    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:15.523966    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:15.534309    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:15.534375    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:15.547257    3932 logs.go:276] 0 containers: []
	W0708 13:07:15.547269    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:15.547325    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:15.557602    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:15.557618    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:15.557624    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:15.571534    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:15.571547    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:15.584039    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:15.584052    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:15.598469    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:15.598480    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:15.609956    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:15.609965    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:15.624241    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:15.624251    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:15.659023    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:15.659036    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:15.664369    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:15.664378    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:15.680563    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:15.680574    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:15.692088    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:15.692102    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:15.704117    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:15.704126    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:15.722135    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:15.722147    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:15.746775    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:15.746783    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:18.287431    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:23.289701    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:23.289967    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:23.307238    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:23.307333    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:23.320788    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:23.320942    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:23.333957    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:23.334020    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:23.344542    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:23.344614    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:23.355654    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:23.355722    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:23.366108    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:23.366177    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:23.375777    3932 logs.go:276] 0 containers: []
	W0708 13:07:23.375788    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:23.375842    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:23.390221    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:23.390238    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:23.390243    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:23.405009    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:23.405023    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:23.417146    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:23.417158    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:23.428714    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:23.428725    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:23.443769    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:23.443783    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:23.455873    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:23.455884    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:23.473825    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:23.473836    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:23.478611    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:23.478619    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:23.517862    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:23.517873    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:23.529864    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:23.529874    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:23.541127    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:23.541137    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:23.564247    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:23.564255    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:23.601206    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:23.601212    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:26.117126    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:31.119335    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:31.119555    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:31.147378    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:31.147497    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:31.163419    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:31.163494    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:31.180280    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:31.180349    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:31.191796    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:31.191860    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:31.202459    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:31.202533    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:31.213263    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:31.213332    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:31.223985    3932 logs.go:276] 0 containers: []
	W0708 13:07:31.223997    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:31.224054    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:31.238856    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:31.238870    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:31.238876    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:31.243834    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:31.243842    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:31.281046    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:31.281057    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:31.296304    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:31.296315    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:31.312128    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:31.312141    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:31.324272    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:31.324283    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:31.343647    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:31.343659    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:31.369029    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:31.369040    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:31.409812    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:31.409826    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:31.424932    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:31.424943    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:31.437377    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:31.437388    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:31.449846    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:31.449858    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:31.461916    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:31.461927    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:33.976400    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:38.976245    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:38.976362    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:38.988731    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:38.988810    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:38.999248    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:38.999316    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:39.009673    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:39.009746    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:39.020203    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:39.020267    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:39.030618    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:39.030693    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:39.049270    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:39.049342    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:39.064700    3932 logs.go:276] 0 containers: []
	W0708 13:07:39.064711    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:39.064771    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:39.075591    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:39.075604    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:39.075609    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:39.091726    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:39.091736    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:39.103219    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:39.103230    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:39.126108    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:39.126115    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:39.161317    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:39.161331    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:39.166266    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:39.166274    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:39.180687    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:39.180697    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:39.194413    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:39.194424    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:39.205675    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:39.205687    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:39.220922    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:39.220934    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:39.235801    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:39.235810    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:39.253064    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:39.253072    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:39.292545    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:39.292553    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:41.804593    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:46.802788    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:46.802985    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:46.818036    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:46.818122    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:46.830678    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:46.830752    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:46.841287    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:46.841361    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:46.851841    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:46.851913    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:46.862553    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:46.862632    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:46.874923    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:46.874997    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:46.892275    3932 logs.go:276] 0 containers: []
	W0708 13:07:46.892293    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:46.892354    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:46.903247    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:46.903262    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:46.903268    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:46.928518    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:46.928530    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:46.940203    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:46.940217    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:46.954723    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:46.954735    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:46.968196    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:46.968208    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:46.980322    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:46.980332    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:46.994407    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:46.994416    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:47.008728    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:47.008738    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:47.020283    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:47.020294    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:47.041826    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:47.041839    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:47.053504    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:47.053513    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:47.090532    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:47.090539    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:47.094658    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:47.094665    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:49.629913    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:54.630077    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:54.630280    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:54.656202    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:54.656337    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:54.682311    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:54.682398    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:54.694517    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:54.694588    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:54.705423    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:54.705491    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:54.716061    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:54.716124    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:54.726258    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:54.726314    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:54.736072    3932 logs.go:276] 0 containers: []
	W0708 13:07:54.736084    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:54.736130    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:54.746708    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:54.746722    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:54.746728    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:54.758063    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:54.758076    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:54.775792    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:54.775803    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:54.791832    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:54.791841    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:54.816752    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:54.816760    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:54.856310    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:54.856322    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:54.895487    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:54.895498    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:54.912122    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:54.912135    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:54.927011    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:54.927021    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:54.938317    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:54.938328    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:54.942655    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:54.942664    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:54.956954    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:54.956964    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:54.969134    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:54.969146    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:57.481960    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:02.483102    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:02.483522    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:02.521816    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:02.521994    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:02.546323    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:02.546414    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:02.561512    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:02.561600    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:02.574101    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:02.574174    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:02.584505    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:02.584578    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:02.595622    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:02.595692    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:02.607501    3932 logs.go:276] 0 containers: []
	W0708 13:08:02.607520    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:02.607582    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:02.621754    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:02.621775    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:02.621780    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:02.638067    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:02.638079    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:02.650268    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:02.650278    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:02.665121    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:02.665131    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:02.705867    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:02.705879    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:02.742278    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:02.742292    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:02.753872    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:02.753882    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:02.765675    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:02.765686    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:02.770431    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:02.770439    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:02.784865    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:02.784876    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:02.796186    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:02.796198    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:02.813921    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:02.813931    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:02.838443    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:02.838454    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:02.849857    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:02.849870    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:02.863811    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:02.863823    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:05.377807    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:10.378264    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:10.378464    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:10.402838    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:10.402957    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:10.422438    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:10.422531    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:10.434683    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:10.434752    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:10.445438    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:10.445510    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:10.455982    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:10.456045    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:10.466556    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:10.466619    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:10.476764    3932 logs.go:276] 0 containers: []
	W0708 13:08:10.476774    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:10.476830    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:10.487459    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:10.487477    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:10.487482    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:10.499797    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:10.499809    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:10.511523    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:10.511534    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:10.526362    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:10.526374    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:10.539410    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:10.539421    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:10.544151    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:10.544160    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:10.558425    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:10.558435    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:10.571631    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:10.571642    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:10.585447    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:10.585457    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:10.607732    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:10.607745    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:10.635371    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:10.635397    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:10.649670    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:10.649686    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:10.689853    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:10.689865    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:10.729270    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:10.729283    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:10.766803    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:10.766815    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:13.284937    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:18.286813    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:18.287232    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:18.318306    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:18.318434    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:18.336831    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:18.336927    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:18.351470    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:18.351543    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:18.363073    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:18.363130    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:18.374023    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:18.374090    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:18.385163    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:18.385223    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:18.396281    3932 logs.go:276] 0 containers: []
	W0708 13:08:18.396292    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:18.396349    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:18.407377    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:18.407395    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:18.407401    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:18.422224    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:18.422234    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:18.437311    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:18.437321    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:18.474008    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:18.474018    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:18.486369    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:18.486382    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:18.497414    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:18.497424    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:18.508856    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:18.508867    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:18.520238    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:18.520248    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:18.545582    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:18.545591    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:18.584562    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:18.584570    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:18.589067    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:18.589073    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:18.602968    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:18.602981    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:18.614936    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:18.614947    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:18.626388    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:18.626398    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:18.638110    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:18.638119    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:21.157132    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:26.159238    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:26.159724    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:26.200094    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:26.200238    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:26.225043    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:26.225136    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:26.244121    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:26.244199    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:26.256975    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:26.257049    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:26.268158    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:26.268227    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:26.280044    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:26.280120    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:26.291275    3932 logs.go:276] 0 containers: []
	W0708 13:08:26.291288    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:26.291353    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:26.303080    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:26.303099    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:26.303105    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:26.339199    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:26.339212    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:26.351984    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:26.351995    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:26.364008    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:26.364023    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:26.387448    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:26.387455    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:26.424952    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:26.424959    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:26.439961    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:26.439971    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:26.452239    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:26.452249    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:26.456683    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:26.456692    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:26.468452    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:26.468465    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:26.480945    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:26.480955    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:26.495990    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:26.496000    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:26.516055    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:26.516064    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:26.530530    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:26.530539    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:26.547338    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:26.547350    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:29.059225    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:34.059225    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:34.059372    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:34.079118    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:34.079213    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:34.097539    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:34.097611    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:34.109240    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:34.109315    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:34.119703    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:34.119773    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:34.130000    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:34.130066    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:34.140464    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:34.140530    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:34.150432    3932 logs.go:276] 0 containers: []
	W0708 13:08:34.150443    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:34.150499    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:34.165448    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:34.165465    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:34.165471    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:34.201958    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:34.201970    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:34.214467    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:34.214480    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:34.238664    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:34.238674    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:34.250442    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:34.250453    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:34.265507    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:34.265517    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:34.280115    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:34.280125    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:34.293695    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:34.293704    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:34.305139    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:34.305149    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:34.343754    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:34.343764    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:34.348061    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:34.348069    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:34.359782    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:34.359793    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:34.371431    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:34.371441    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:34.390159    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:34.390168    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:34.401539    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:34.401548    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:36.925325    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:41.927344    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:41.927441    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:41.939042    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:41.939110    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:41.949629    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:41.949687    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:41.960632    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:41.960702    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:41.971070    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:41.971131    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:41.981520    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:41.981590    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:41.995857    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:41.995920    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:42.006295    3932 logs.go:276] 0 containers: []
	W0708 13:08:42.006306    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:42.006366    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:42.016846    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:42.016861    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:42.016866    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:42.051250    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:42.051261    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:42.070173    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:42.070183    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:42.095278    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:42.095286    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:42.106778    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:42.106787    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:42.146311    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:42.146321    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:42.150741    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:42.150750    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:42.162743    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:42.162756    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:42.180662    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:42.180672    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:42.191730    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:42.191741    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:42.207451    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:42.207460    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:42.219107    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:42.219118    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:42.239211    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:42.239221    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:42.259269    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:42.259277    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:42.271662    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:42.271674    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:44.785537    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:49.787679    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:49.787825    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:49.805561    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:49.805647    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:49.826859    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:49.826948    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:49.839679    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:49.839749    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:49.854056    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:49.854130    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:49.864524    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:49.864593    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:49.874923    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:49.874997    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:49.885607    3932 logs.go:276] 0 containers: []
	W0708 13:08:49.885619    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:49.885675    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:49.895891    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:49.895910    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:49.895915    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:49.900475    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:49.900486    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:49.915670    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:49.915682    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:49.933740    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:49.933749    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:49.959947    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:49.959957    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:49.976679    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:49.976694    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:50.015639    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:50.015650    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:50.027405    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:50.027421    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:50.039594    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:50.039604    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:50.076694    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:50.076702    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:50.094919    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:50.094929    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:50.108957    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:50.108969    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:50.120720    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:50.120730    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:50.131942    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:50.131951    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:50.143205    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:50.143214    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:52.656260    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:57.658782    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:57.658983    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:57.680382    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:57.680463    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:57.693867    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:57.693937    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:57.705354    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:57.705419    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:57.715676    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:57.715739    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:57.726251    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:57.726321    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:57.737169    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:57.737237    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:57.747072    3932 logs.go:276] 0 containers: []
	W0708 13:08:57.747087    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:57.747148    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:57.757717    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:57.757733    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:57.757739    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:57.770167    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:57.770177    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:57.782883    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:57.782892    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:57.824967    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:57.824979    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:57.844619    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:57.844626    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:57.862046    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:57.862058    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:57.891637    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:57.891649    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:57.908125    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:57.908138    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:57.947206    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:57.947224    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:57.951831    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:57.951840    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:57.963837    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:57.963846    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:57.988243    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:57.988254    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:57.999700    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:57.999711    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:58.014028    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:58.014042    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:58.025833    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:58.025847    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:09:00.542187    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:05.544532    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:05.544624    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:09:05.555957    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:09:05.556040    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:09:05.567003    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:09:05.567086    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:09:05.578178    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:09:05.578261    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:09:05.589552    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:09:05.589622    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:09:05.605010    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:09:05.605087    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:09:05.615992    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:09:05.616065    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:09:05.627418    3932 logs.go:276] 0 containers: []
	W0708 13:09:05.627429    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:09:05.627494    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:09:05.638528    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:09:05.638547    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:09:05.638552    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:09:05.678632    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:09:05.678642    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:09:05.683253    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:09:05.683262    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:09:05.717562    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:09:05.717575    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:09:05.731785    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:09:05.731797    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:09:05.749315    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:09:05.749327    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:09:05.764050    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:09:05.764064    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:09:05.779096    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:09:05.779108    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:09:05.797988    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:09:05.798003    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:09:05.810553    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:09:05.810567    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:09:05.827456    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:09:05.827468    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:09:05.839671    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:09:05.839683    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:09:05.866258    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:09:05.866280    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:09:05.879122    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:09:05.879136    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:09:05.892818    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:09:05.892830    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:09:08.407273    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:13.407857    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:13.408061    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:09:13.427113    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:09:13.427204    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:09:13.441206    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:09:13.441279    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:09:13.454846    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:09:13.454921    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:09:13.465920    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:09:13.465980    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:09:13.476438    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:09:13.476504    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:09:13.487479    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:09:13.487546    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:09:13.498670    3932 logs.go:276] 0 containers: []
	W0708 13:09:13.498680    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:09:13.498732    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:09:13.509468    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:09:13.509485    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:09:13.509490    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:09:13.524616    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:09:13.524624    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:09:13.543213    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:09:13.543223    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:09:13.583686    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:09:13.583699    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:09:13.588736    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:09:13.588745    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:09:13.624110    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:09:13.624124    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:09:13.638681    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:09:13.638692    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:09:13.653463    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:09:13.653474    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:09:13.665490    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:09:13.665501    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:09:13.677313    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:09:13.677324    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:09:13.688616    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:09:13.688627    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:09:13.712393    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:09:13.712401    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:09:13.723607    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:09:13.723617    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:09:13.737619    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:09:13.737629    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:09:13.749272    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:09:13.749282    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:09:16.269385    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:21.271611    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:21.271725    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:09:21.283748    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:09:21.283823    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:09:21.294798    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:09:21.294866    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:09:21.305454    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:09:21.305521    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:09:21.315946    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:09:21.316015    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:09:21.326798    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:09:21.326862    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:09:21.337473    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:09:21.337534    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:09:21.348496    3932 logs.go:276] 0 containers: []
	W0708 13:09:21.348507    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:09:21.348562    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:09:21.363022    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:09:21.363040    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:09:21.363045    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:09:21.389616    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:09:21.389631    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:09:21.429205    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:09:21.429217    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:09:21.434271    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:09:21.434278    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:09:21.446571    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:09:21.446581    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:09:21.461095    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:09:21.461106    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:09:21.473203    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:09:21.473213    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:09:21.485455    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:09:21.485469    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:09:21.497546    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:09:21.497557    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:09:21.536979    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:09:21.536991    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:09:21.555342    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:09:21.555353    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:09:21.567526    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:09:21.567537    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:09:21.580355    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:09:21.580366    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:09:21.596619    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:09:21.596630    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:09:21.609753    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:09:21.609764    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:09:24.138744    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:29.140813    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:29.141053    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:09:29.157337    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:09:29.157419    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:09:29.170139    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:09:29.170203    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:09:29.181076    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:09:29.181148    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:09:29.192192    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:09:29.192250    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:09:29.202576    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:09:29.202643    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:09:29.212865    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:09:29.212930    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:09:29.223989    3932 logs.go:276] 0 containers: []
	W0708 13:09:29.224002    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:09:29.224062    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:09:29.238451    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:09:29.238468    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:09:29.238474    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:09:29.277355    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:09:29.277363    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:09:29.282460    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:09:29.282470    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:09:29.296705    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:09:29.296714    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:09:29.310296    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:09:29.310306    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:09:29.334626    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:09:29.334633    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:09:29.345720    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:09:29.345730    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:09:29.364232    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:09:29.364242    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:09:29.376040    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:09:29.376055    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:09:29.394130    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:09:29.394145    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:09:29.411068    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:09:29.411079    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:09:29.422508    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:09:29.422519    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:09:29.434942    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:09:29.434953    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:09:29.470003    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:09:29.470015    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:09:29.483983    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:09:29.483995    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:09:31.997359    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:36.999526    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:36.999692    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:09:37.013362    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:09:37.013436    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:09:37.024077    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:09:37.024149    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:09:37.034678    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:09:37.034751    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:09:37.045629    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:09:37.045697    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:09:37.057138    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:09:37.057205    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:09:37.072442    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:09:37.072510    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:09:37.083529    3932 logs.go:276] 0 containers: []
	W0708 13:09:37.083545    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:09:37.083607    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:09:37.095260    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:09:37.095278    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:09:37.095284    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:09:37.100150    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:09:37.100160    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:09:37.136704    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:09:37.136717    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:09:37.151626    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:09:37.151639    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:09:37.166190    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:09:37.166202    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:09:37.184348    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:09:37.184363    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:09:37.222063    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:09:37.222072    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:09:37.234549    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:09:37.234557    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:09:37.246258    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:09:37.246269    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:09:37.257792    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:09:37.257802    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:09:37.272291    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:09:37.272303    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:09:37.284369    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:09:37.284381    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:09:37.295778    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:09:37.295787    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:09:37.307710    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:09:37.307720    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:09:37.319091    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:09:37.319100    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:09:39.845024    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:44.846993    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:44.850389    3932 out.go:177] 
	W0708 13:09:44.854378    3932 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0708 13:09:44.854385    3932 out.go:239] * 
	W0708 13:09:44.855008    3932 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:09:44.870212    3932 out.go:177] 

                                                
                                                
** /stderr **
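Note: the stderr above ends with the minikube start process (pid 3932) repeatedly probing the guest apiserver health endpoint (https://10.0.2.15:8443/healthz) and timing out until the 6m0s node wait expires, which is what produces the GUEST_START exit. A rough way to repeat that probe by hand against this profile is sketched below; the binary path, profile name, and guest address are taken from this log, and the endpoint may legitimately answer 401/403 rather than "ok" depending on the apiserver's anonymous-auth settings.

    # sketch: query the same healthz endpoint from inside the guest (-k skips TLS verification)
    out/minikube-darwin-arm64 -p running-upgrade-129000 ssh -- curl -k https://10.0.2.15:8443/healthz
    # and confirm whether an apiserver container is running at all, mirroring the docker ps calls in the log
    out/minikube-darwin-arm64 -p running-upgrade-129000 ssh -- sudo docker ps --filter name=k8s_kube-apiserver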
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-129000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-08 13:09:44.970157 -0700 PDT m=+2484.457874918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-129000 -n running-upgrade-129000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-129000 -n running-upgrade-129000: exit status 2 (15.660914416s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
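Note: the status --format flag used above takes a Go text/template evaluated against minikube's status struct, so a single call can report the other components alongside the host state. A sketch is below; the field names other than .Host are taken from minikube's documented status output rather than from this log, so treat them as assumptions for this particular binary.

    # sketch: report host, kubelet, apiserver and kubeconfig state in one call
    out/minikube-darwin-arm64 status -p running-upgrade-129000 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'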
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-129000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-803000          | force-systemd-flag-803000 | jenkins | v1.33.1 | 08 Jul 24 12:59 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-827000              | force-systemd-env-827000  | jenkins | v1.33.1 | 08 Jul 24 12:59 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-827000           | force-systemd-env-827000  | jenkins | v1.33.1 | 08 Jul 24 12:59 PDT | 08 Jul 24 12:59 PDT |
	| start   | -p docker-flags-537000                | docker-flags-537000       | jenkins | v1.33.1 | 08 Jul 24 12:59 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-803000             | force-systemd-flag-803000 | jenkins | v1.33.1 | 08 Jul 24 13:00 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-803000          | force-systemd-flag-803000 | jenkins | v1.33.1 | 08 Jul 24 13:00 PDT | 08 Jul 24 13:00 PDT |
	| start   | -p cert-expiration-546000             | cert-expiration-546000    | jenkins | v1.33.1 | 08 Jul 24 13:00 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-537000 ssh               | docker-flags-537000       | jenkins | v1.33.1 | 08 Jul 24 13:00 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-537000 ssh               | docker-flags-537000       | jenkins | v1.33.1 | 08 Jul 24 13:00 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-537000                | docker-flags-537000       | jenkins | v1.33.1 | 08 Jul 24 13:00 PDT | 08 Jul 24 13:00 PDT |
	| start   | -p cert-options-750000                | cert-options-750000       | jenkins | v1.33.1 | 08 Jul 24 13:00 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-750000 ssh               | cert-options-750000       | jenkins | v1.33.1 | 08 Jul 24 13:00 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-750000 -- sudo        | cert-options-750000       | jenkins | v1.33.1 | 08 Jul 24 13:00 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-750000                | cert-options-750000       | jenkins | v1.33.1 | 08 Jul 24 13:00 PDT | 08 Jul 24 13:00 PDT |
	| start   | -p running-upgrade-129000             | minikube                  | jenkins | v1.26.0 | 08 Jul 24 13:00 PDT | 08 Jul 24 13:01 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-129000             | running-upgrade-129000    | jenkins | v1.33.1 | 08 Jul 24 13:01 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-546000             | cert-expiration-546000    | jenkins | v1.33.1 | 08 Jul 24 13:03 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-546000             | cert-expiration-546000    | jenkins | v1.33.1 | 08 Jul 24 13:03 PDT | 08 Jul 24 13:03 PDT |
	| start   | -p kubernetes-upgrade-644000          | kubernetes-upgrade-644000 | jenkins | v1.33.1 | 08 Jul 24 13:03 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-644000          | kubernetes-upgrade-644000 | jenkins | v1.33.1 | 08 Jul 24 13:03 PDT | 08 Jul 24 13:03 PDT |
	| start   | -p kubernetes-upgrade-644000          | kubernetes-upgrade-644000 | jenkins | v1.33.1 | 08 Jul 24 13:03 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-644000          | kubernetes-upgrade-644000 | jenkins | v1.33.1 | 08 Jul 24 13:03 PDT | 08 Jul 24 13:03 PDT |
	| start   | -p stopped-upgrade-170000             | minikube                  | jenkins | v1.26.0 | 08 Jul 24 13:03 PDT | 08 Jul 24 13:04 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-170000 stop           | minikube                  | jenkins | v1.26.0 | 08 Jul 24 13:04 PDT | 08 Jul 24 13:04 PDT |
	| start   | -p stopped-upgrade-170000             | stopped-upgrade-170000    | jenkins | v1.33.1 | 08 Jul 24 13:04 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
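Note: the Audit table above is the command-history section that "minikube logs" renders from minikube's audit trail. If the raw entries are needed (for example to correlate profiles with CI timing), they are normally stored as one JSON event per line under the minikube home directory; the path and the .data field names in the sketch below are assumptions based on this run's MINIKUBE_HOME and on recent minikube layouts, so verify them before relying on the output.

    # sketch: dump start time, command and profile from the raw audit log (path and fields assumed, not verified)
    AUDIT=/Users/jenkins/minikube-integration/19195-1270/.minikube/logs/audit.json
    jq -r '.data | "\(.startTime)\t\(.command)\t\(.profile)"' "$AUDIT"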
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 13:04:29
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 13:04:29.633274    4087 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:04:29.633482    4087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:04:29.633486    4087 out.go:304] Setting ErrFile to fd 2...
	I0708 13:04:29.633489    4087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:04:29.633654    4087 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:04:29.634864    4087 out.go:298] Setting JSON to false
	I0708 13:04:29.654058    4087 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3837,"bootTime":1720465232,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:04:29.654129    4087 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:04:29.659274    4087 out.go:177] * [stopped-upgrade-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:04:29.667245    4087 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:04:29.667272    4087 notify.go:220] Checking for updates...
	I0708 13:04:29.674235    4087 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:04:29.677251    4087 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:04:29.680315    4087 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:04:29.683161    4087 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:04:29.686276    4087 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:04:29.689572    4087 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:04:29.691136    4087 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0708 13:04:29.694245    4087 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:04:29.698358    4087 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 13:04:29.703216    4087 start.go:297] selected driver: qemu2
	I0708 13:04:29.703222    4087 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50600 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0708 13:04:29.703271    4087 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:04:29.706066    4087 cni.go:84] Creating CNI manager for ""
	I0708 13:04:29.706085    4087 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:04:29.706117    4087 start.go:340] cluster config:
	{Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50600 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0708 13:04:29.706171    4087 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:04:29.713197    4087 out.go:177] * Starting "stopped-upgrade-170000" primary control-plane node in "stopped-upgrade-170000" cluster
	I0708 13:04:29.719302    4087 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0708 13:04:29.719326    4087 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0708 13:04:29.719332    4087 cache.go:56] Caching tarball of preloaded images
	I0708 13:04:29.719399    4087 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:04:29.719405    4087 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0708 13:04:29.719453    4087 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/config.json ...
	I0708 13:04:29.719733    4087 start.go:360] acquireMachinesLock for stopped-upgrade-170000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:04:29.719767    4087 start.go:364] duration metric: took 27.459µs to acquireMachinesLock for "stopped-upgrade-170000"
	I0708 13:04:29.719775    4087 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:04:29.719780    4087 fix.go:54] fixHost starting: 
	I0708 13:04:29.719890    4087 fix.go:112] recreateIfNeeded on stopped-upgrade-170000: state=Stopped err=<nil>
	W0708 13:04:29.719898    4087 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:04:29.724240    4087 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-170000" ...
	I0708 13:04:27.449337    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:04:29.732310    4087 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50565-:22,hostfwd=tcp::50566-:2376,hostname=stopped-upgrade-170000 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/disk.qcow2
	I0708 13:04:29.777201    4087 main.go:141] libmachine: STDOUT: 
	I0708 13:04:29.777234    4087 main.go:141] libmachine: STDERR: 
	I0708 13:04:29.777241    4087 main.go:141] libmachine: Waiting for VM to start (ssh -p 50565 docker@127.0.0.1)...
	I0708 13:04:32.452195    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:04:32.452733    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:04:32.490471    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:04:32.490600    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:04:32.509538    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:04:32.509635    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:04:32.522282    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:04:32.522356    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:04:32.533232    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:04:32.533316    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:04:32.544000    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:04:32.544066    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:04:32.554737    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:04:32.554809    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:04:32.565177    3932 logs.go:276] 0 containers: []
	W0708 13:04:32.565188    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:04:32.565246    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:04:32.576064    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:04:32.576083    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:04:32.576088    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:04:32.580748    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:04:32.580755    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:04:32.594444    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:04:32.594457    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:04:32.614099    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:04:32.614114    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:04:32.629516    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:04:32.629526    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:04:32.645455    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:04:32.645465    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:04:32.656231    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:04:32.656241    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:04:32.698024    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:04:32.698039    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:04:32.709532    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:04:32.709543    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:04:32.723928    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:04:32.723939    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:04:32.748118    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:04:32.748125    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:04:32.759956    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:04:32.759968    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:04:32.771839    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:04:32.771851    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:04:32.811550    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:04:32.811558    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:04:32.828251    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:04:32.828260    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:04:32.840189    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:04:32.840202    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:04:32.851159    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:04:32.851171    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:04:35.373897    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:04:40.376119    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:04:40.376269    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:04:40.388301    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:04:40.388378    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:04:40.400329    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:04:40.400399    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:04:40.411891    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:04:40.411963    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:04:40.423317    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:04:40.423391    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:04:40.434046    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:04:40.434116    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:04:40.445002    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:04:40.445070    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:04:40.456767    3932 logs.go:276] 0 containers: []
	W0708 13:04:40.456779    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:04:40.456837    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:04:40.469439    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:04:40.469454    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:04:40.469466    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:04:40.485111    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:04:40.485122    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:04:40.501235    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:04:40.501245    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:04:40.535872    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:04:40.535884    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:04:40.549814    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:04:40.549826    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:04:40.565425    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:04:40.565439    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:04:40.576917    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:04:40.576930    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:04:40.590032    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:04:40.590045    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:04:40.610284    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:04:40.610304    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:04:40.637573    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:04:40.637595    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:04:40.683137    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:04:40.683155    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:04:40.702446    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:04:40.702459    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:04:40.723422    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:04:40.723446    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:04:40.736487    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:04:40.736498    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:04:40.750243    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:04:40.750254    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:04:40.755954    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:04:40.755965    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:04:40.774893    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:04:40.774906    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:04:43.292541    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:04:49.625945    4087 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/config.json ...
	I0708 13:04:49.626782    4087 machine.go:94] provisionDockerMachine start ...
	I0708 13:04:49.627043    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:49.627598    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:49.627614    4087 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 13:04:48.294843    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:04:48.295122    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:04:48.330489    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:04:48.330591    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:04:48.347490    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:04:48.347582    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:04:48.362136    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:04:48.362220    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:04:48.373510    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:04:48.373588    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:04:48.384041    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:04:48.384108    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:04:48.394746    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:04:48.394806    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:04:48.405111    3932 logs.go:276] 0 containers: []
	W0708 13:04:48.405122    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:04:48.405179    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:04:48.416123    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:04:48.416142    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:04:48.416153    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:04:48.427773    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:04:48.427789    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:04:48.440072    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:04:48.440082    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:04:48.453821    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:04:48.453831    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:04:48.467954    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:04:48.467967    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:04:48.479941    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:04:48.479955    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:04:48.504136    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:04:48.504142    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:04:48.539684    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:04:48.539696    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:04:48.553997    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:04:48.554011    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:04:48.569211    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:04:48.569221    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:04:48.580864    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:04:48.580877    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:04:48.593202    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:04:48.593216    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:04:48.634828    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:04:48.634839    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:04:48.639295    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:04:48.639301    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:04:48.650707    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:04:48.650717    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:04:48.668451    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:04:48.668466    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:04:48.687341    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:04:48.687351    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:04:49.726616    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 13:04:49.726656    4087 buildroot.go:166] provisioning hostname "stopped-upgrade-170000"
	I0708 13:04:49.726776    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:49.727021    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:49.727032    4087 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-170000 && echo "stopped-upgrade-170000" | sudo tee /etc/hostname
	I0708 13:04:49.812087    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-170000
	
	I0708 13:04:49.812156    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:49.812309    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:49.812319    4087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-170000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-170000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-170000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 13:04:49.890645    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 13:04:49.890657    4087 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19195-1270/.minikube CaCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19195-1270/.minikube}
	I0708 13:04:49.890678    4087 buildroot.go:174] setting up certificates
	I0708 13:04:49.890687    4087 provision.go:84] configureAuth start
	I0708 13:04:49.890692    4087 provision.go:143] copyHostCerts
	I0708 13:04:49.890767    4087 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem, removing ...
	I0708 13:04:49.890773    4087 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 13:04:49.890904    4087 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem (1123 bytes)
	I0708 13:04:49.891126    4087 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem, removing ...
	I0708 13:04:49.891130    4087 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 13:04:49.891185    4087 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem (1675 bytes)
	I0708 13:04:49.891304    4087 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem, removing ...
	I0708 13:04:49.891307    4087 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 13:04:49.891358    4087 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem (1078 bytes)
	I0708 13:04:49.891458    4087 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-170000 san=[127.0.0.1 localhost minikube stopped-upgrade-170000]
	I0708 13:04:50.001283    4087 provision.go:177] copyRemoteCerts
	I0708 13:04:50.001320    4087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 13:04:50.001329    4087 sshutil.go:53] new ssh client: &{IP:localhost Port:50565 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0708 13:04:50.039622    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 13:04:50.046195    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0708 13:04:50.053323    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 13:04:50.060533    4087 provision.go:87] duration metric: took 169.845125ms to configureAuth
	I0708 13:04:50.060542    4087 buildroot.go:189] setting minikube options for container-runtime
	I0708 13:04:50.060653    4087 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:04:50.060690    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:50.060788    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:50.060792    4087 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0708 13:04:50.133884    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0708 13:04:50.133895    4087 buildroot.go:70] root file system type: tmpfs
	I0708 13:04:50.133953    4087 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0708 13:04:50.134011    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:50.134134    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:50.134167    4087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0708 13:04:50.211500    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0708 13:04:50.211561    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:50.211698    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:50.211706    4087 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0708 13:04:50.598449    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
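Note: the unit text written above (and its embedded comments) uses the standard systemd pattern for replacing an inherited start command: an empty ExecStart= line first clears whatever the base unit defined, because a non-oneshot service may declare only one ExecStart. The follow-up command in the log then installs the file only when diff reports a change, reloads systemd, and restarts the service. A minimal standalone illustration of the same clearing pattern is sketched below; the override path and dockerd flags are hypothetical and not part of this run.

    # sketch: a drop-in that clears the packaged ExecStart before supplying a single replacement
    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker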
	
	I0708 13:04:50.598463    4087 machine.go:97] duration metric: took 971.691583ms to provisionDockerMachine
	I0708 13:04:50.598473    4087 start.go:293] postStartSetup for "stopped-upgrade-170000" (driver="qemu2")
	I0708 13:04:50.598479    4087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 13:04:50.598543    4087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 13:04:50.598553    4087 sshutil.go:53] new ssh client: &{IP:localhost Port:50565 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0708 13:04:50.638714    4087 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 13:04:50.640018    4087 info.go:137] Remote host: Buildroot 2021.02.12
	I0708 13:04:50.640025    4087 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/addons for local assets ...
	I0708 13:04:50.640117    4087 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/files for local assets ...
	I0708 13:04:50.640241    4087 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> 17672.pem in /etc/ssl/certs
	I0708 13:04:50.640364    4087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 13:04:50.642735    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /etc/ssl/certs/17672.pem (1708 bytes)
	I0708 13:04:50.649911    4087 start.go:296] duration metric: took 51.43525ms for postStartSetup
	I0708 13:04:50.649941    4087 fix.go:56] duration metric: took 20.930760042s for fixHost
	I0708 13:04:50.649975    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:50.650085    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:50.650090    4087 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0708 13:04:50.726243    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720469091.136840587
	
	I0708 13:04:50.726251    4087 fix.go:216] guest clock: 1720469091.136840587
	I0708 13:04:50.726259    4087 fix.go:229] Guest: 2024-07-08 13:04:51.136840587 -0700 PDT Remote: 2024-07-08 13:04:50.649944 -0700 PDT m=+21.047731959 (delta=486.896587ms)
	I0708 13:04:50.726275    4087 fix.go:200] guest clock delta is within tolerance: 486.896587ms
	I0708 13:04:50.726279    4087 start.go:83] releasing machines lock for "stopped-upgrade-170000", held for 21.007108959s
	I0708 13:04:50.726336    4087 ssh_runner.go:195] Run: cat /version.json
	I0708 13:04:50.726345    4087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 13:04:50.726345    4087 sshutil.go:53] new ssh client: &{IP:localhost Port:50565 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0708 13:04:50.726359    4087 sshutil.go:53] new ssh client: &{IP:localhost Port:50565 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	W0708 13:04:50.726864    4087 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50687->127.0.0.1:50565: read: connection reset by peer
	I0708 13:04:50.726882    4087 retry.go:31] will retry after 289.126214ms: ssh: handshake failed: read tcp 127.0.0.1:50687->127.0.0.1:50565: read: connection reset by peer
	W0708 13:04:50.764384    4087 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0708 13:04:50.764429    4087 ssh_runner.go:195] Run: systemctl --version
	I0708 13:04:50.766128    4087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 13:04:50.767767    4087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 13:04:50.767803    4087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0708 13:04:50.770430    4087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0708 13:04:50.775407    4087 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 13:04:50.775417    4087 start.go:494] detecting cgroup driver to use...
	I0708 13:04:50.775501    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 13:04:50.782270    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0708 13:04:50.785708    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0708 13:04:50.788965    4087 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0708 13:04:50.789000    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0708 13:04:50.792086    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 13:04:50.795460    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0708 13:04:50.798971    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 13:04:50.802183    4087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 13:04:50.805453    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0708 13:04:50.808367    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0708 13:04:50.811020    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0708 13:04:50.814205    4087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 13:04:50.817205    4087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 13:04:50.819933    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:04:50.898515    4087 ssh_runner.go:195] Run: sudo systemctl restart containerd
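The sequence of sed edits above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), pins the pause image, points conf_dir at /etc/cni/net.d, and then restarts the service. A rough Go equivalent of the central SystemdCgroup substitution, shown only to illustrate what that regex does (the file content below is a placeholder, not the guest's real config):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // A tiny stand-in for /etc/containerd/config.toml.
        conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true`

        // Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }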
	I0708 13:04:50.909378    4087 start.go:494] detecting cgroup driver to use...
	I0708 13:04:50.909444    4087 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0708 13:04:50.916569    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 13:04:50.921512    4087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 13:04:50.927653    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 13:04:50.932064    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 13:04:50.936399    4087 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0708 13:04:50.997831    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 13:04:51.003405    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 13:04:51.009899    4087 ssh_runner.go:195] Run: which cri-dockerd
	I0708 13:04:51.011358    4087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0708 13:04:51.014983    4087 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0708 13:04:51.021830    4087 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0708 13:04:51.105700    4087 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0708 13:04:51.190754    4087 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0708 13:04:51.190820    4087 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
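docker.go then writes a small /etc/docker/daemon.json (130 bytes here) so that dockerd also reports cgroupfs as its cgroup driver, matching the kubelet configuration generated later. The exact payload is not printed in the log; the usual way to express this is Docker's exec-opts key, sketched below as an assumption rather than the literal file minikube generated:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed shape of the generated daemon.json; values are illustrative.
        daemon := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        out, _ := json.MarshalIndent(daemon, "", "  ")
        fmt.Println(string(out))
    }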
	I0708 13:04:51.200344    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:04:51.290608    4087 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 13:04:52.420242    4087 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.12963575s)
	I0708 13:04:52.420321    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0708 13:04:52.425066    4087 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0708 13:04:52.431400    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 13:04:52.436201    4087 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0708 13:04:52.512103    4087 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0708 13:04:52.592764    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:04:52.672687    4087 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0708 13:04:52.678271    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 13:04:52.683080    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:04:52.763908    4087 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0708 13:04:52.802537    4087 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0708 13:04:52.802608    4087 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0708 13:04:52.804642    4087 start.go:562] Will wait 60s for crictl version
	I0708 13:04:52.804695    4087 ssh_runner.go:195] Run: which crictl
	I0708 13:04:52.806215    4087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 13:04:52.821528    4087 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0708 13:04:52.821592    4087 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 13:04:52.838457    4087 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 13:04:52.860236    4087 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0708 13:04:52.860356    4087 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0708 13:04:52.861620    4087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
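Here, and again later for control-plane.minikube.internal, the runner keeps the /etc/hosts entry idempotent: it greps for the tab-separated entry and, if absent, rewrites the file with any stale matching line filtered out and the new one appended. The same ensure-hosts-entry pattern, sketched in Go purely as an illustration of the bash one-liner above:

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry returns hosts content with exactly one "<ip>\t<name>" line.
    func ensureHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale entry for this name
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n10.0.2.3\thost.minikube.internal\n"
        fmt.Print(ensureHostsEntry(hosts, "10.0.2.2", "host.minikube.internal"))
    }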
	I0708 13:04:52.865723    4087 kubeadm.go:877] updating cluster {Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50600 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0708 13:04:52.865767    4087 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0708 13:04:52.865807    4087 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 13:04:52.880207    4087 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0708 13:04:52.880220    4087 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0708 13:04:52.880263    4087 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0708 13:04:52.883294    4087 ssh_runner.go:195] Run: which lz4
	I0708 13:04:52.884576    4087 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 13:04:52.885733    4087 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 13:04:52.885743    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0708 13:04:53.825510    4087 docker.go:649] duration metric: took 940.993ms to copy over tarball
	I0708 13:04:53.825567    4087 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 13:04:51.204041    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:04:55.003584    4087 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.178028166s)
	I0708 13:04:55.003601    4087 ssh_runner.go:146] rm: /preloaded.tar.lz4
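The preload path above is: stat /preloaded.tar.lz4 on the guest, and when it is missing, scp the cached ~360 MB tarball over, unpack it into /var through tar's lz4 filter, then remove it. A condensed sketch of that sequence with os/exec; the ssh/scp plumbing from ssh_runner.go is replaced by plain local commands here, so treat it as an illustration only:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tarball := "/preloaded.tar.lz4" // path used on the guest in the log above

        // Existence check, mirroring: stat -c "%s %y" /preloaded.tar.lz4
        if err := exec.Command("stat", "-c", "%s %y", tarball).Run(); err != nil {
            fmt.Println("tarball missing: the cached preload would be scp'd here")
        }

        // Unpack into /var, mirroring the tar invocation in the log above.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
            "security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
        }
    }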
	I0708 13:04:55.019041    4087 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0708 13:04:55.022237    4087 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0708 13:04:55.027177    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:04:55.107571    4087 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 13:04:56.620818    4087 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.513274584s)
	I0708 13:04:56.620916    4087 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 13:04:56.639336    4087 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0708 13:04:56.639346    4087 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0708 13:04:56.639352    4087 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0708 13:04:56.643896    4087 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:04:56.645471    4087 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:04:56.647454    4087 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:04:56.647515    4087 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0708 13:04:56.649017    4087 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:04:56.649122    4087 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:04:56.650669    4087 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0708 13:04:56.650794    4087 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0708 13:04:56.652113    4087 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0708 13:04:56.652203    4087 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:04:56.653169    4087 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:04:56.653261    4087 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0708 13:04:56.654190    4087 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0708 13:04:56.654275    4087 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0708 13:04:56.655107    4087 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:04:56.655739    4087 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0708 13:04:57.108505    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0708 13:04:57.120747    4087 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0708 13:04:57.120769    4087 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0708 13:04:57.120818    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W0708 13:04:57.122814    4087 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0708 13:04:57.122901    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:04:57.131982    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0708 13:04:57.139422    4087 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0708 13:04:57.139449    4087 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:04:57.139501    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:04:57.149569    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0708 13:04:57.149675    4087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0708 13:04:57.151304    4087 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0708 13:04:57.151316    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0708 13:04:57.155019    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0708 13:04:57.165438    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0708 13:04:57.174412    4087 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0708 13:04:57.174434    4087 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0708 13:04:57.174489    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0708 13:04:57.177665    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:04:57.196516    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:04:57.197921    4087 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0708 13:04:57.197941    4087 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0708 13:04:57.197982    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0708 13:04:57.207981    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0708 13:04:57.208090    4087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0708 13:04:57.217298    4087 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0708 13:04:57.217321    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0708 13:04:57.228243    4087 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0708 13:04:57.228266    4087 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:04:57.228297    4087 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0708 13:04:57.228307    4087 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:04:57.228318    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:04:57.228332    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:04:57.231915    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0708 13:04:57.239850    4087 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0708 13:04:57.239962    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:04:57.247030    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0708 13:04:57.247146    4087 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0708 13:04:57.247158    4087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0708 13:04:57.247190    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0708 13:04:57.294871    4087 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0708 13:04:57.294897    4087 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0708 13:04:57.294902    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0708 13:04:57.294903    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0708 13:04:57.294935    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0708 13:04:57.295445    4087 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0708 13:04:57.295456    4087 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0708 13:04:57.295461    4087 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0708 13:04:57.295466    4087 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:04:57.295508    4087 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:04:57.295508    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0708 13:04:57.295543    4087 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0708 13:04:57.295559    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0708 13:04:57.371864    4087 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0708 13:04:57.371894    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0708 13:04:57.371900    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0708 13:04:57.372004    4087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0708 13:04:57.378851    4087 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0708 13:04:57.378874    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0708 13:04:57.446240    4087 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0708 13:04:57.446254    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0708 13:04:57.827739    4087 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0708 13:04:57.827766    4087 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0708 13:04:57.827771    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0708 13:04:57.980688    4087 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0708 13:04:57.980730    4087 cache_images.go:92] duration metric: took 1.34141s to LoadCachedImages
	W0708 13:04:57.980771    4087 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
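The cache_images.go block above follows one pattern per image: docker image inspect for the expected content hash, and when the runtime has a different copy (here the preloaded amd64 builds on an arm64 guest), docker rmi it, copy the cached arm64 tarball into /var/lib/minikube/images, and pipe it through docker load. kube-apiserver's cached tarball is missing on the host, which produces the X warning above. A compressed sketch of that per-image step (hash and paths are placeholders; error handling is deliberately loose):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // loadCachedImage makes sure the runtime has the expected build of an image.
    func loadCachedImage(image, wantID, cachedTar string) error {
        out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
        if strings.TrimSpace(string(out)) == wantID {
            return nil // already present with the right content hash
        }
        exec.Command("docker", "rmi", image).Run() // drop the wrong-arch / stale copy
        // minikube first scp's the tarball to /var/lib/minikube/images; here we
        // simply load it from a local path to keep the sketch short.
        return exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo cat %s | docker load", cachedTar)).Run()
    }

    func main() {
        err := loadCachedImage("registry.k8s.io/pause:3.7",
            "sha256:<expected-id>", "/var/lib/minikube/images/pause_3.7")
        fmt.Println("load result:", err)
    }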
	I0708 13:04:57.980777    4087 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0708 13:04:57.980830    4087 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-170000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 13:04:57.980900    4087 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0708 13:04:57.994213    4087 cni.go:84] Creating CNI manager for ""
	I0708 13:04:57.994228    4087 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:04:57.994234    4087 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 13:04:57.994243    4087 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-170000 NodeName:stopped-upgrade-170000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 13:04:57.994313    4087 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-170000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0708 13:04:57.994369    4087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0708 13:04:57.997711    4087 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 13:04:57.997742    4087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 13:04:58.000913    4087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0708 13:04:58.005940    4087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 13:04:58.010850    4087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0708 13:04:58.016017    4087 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0708 13:04:58.017312    4087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 13:04:58.021342    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:04:58.101905    4087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 13:04:58.107525    4087 certs.go:68] Setting up /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000 for IP: 10.0.2.15
	I0708 13:04:58.107535    4087 certs.go:194] generating shared ca certs ...
	I0708 13:04:58.107543    4087 certs.go:226] acquiring lock for ca certs: {Name:mka13b605a6983b2618b91f3a0bdec43c132a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:04:58.107709    4087 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key
	I0708 13:04:58.107954    4087 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key
	I0708 13:04:58.107964    4087 certs.go:256] generating profile certs ...
	I0708 13:04:58.108179    4087 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/client.key
	I0708 13:04:58.108197    4087 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07
	I0708 13:04:58.108209    4087 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0708 13:04:58.263782    4087 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07 ...
	I0708 13:04:58.263797    4087 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07: {Name:mk115bf0da0e1aa0b5826bc251335868038dfc84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:04:58.264306    4087 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07 ...
	I0708 13:04:58.264314    4087 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07: {Name:mkffaac2e55ffdfdcc2f53b96f73fb178800d26f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:04:58.264468    4087 certs.go:381] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.crt
	I0708 13:04:58.264751    4087 certs.go:385] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.key
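certs.go regenerates the apiserver serving certificate so that its SAN set covers exactly the IPs listed above (the service VIP, loopback, and the node address). A minimal sketch of issuing such a certificate with Go's crypto/x509, signed by a cluster CA; subject, key size, and validity below are assumptions for illustration, not minikube's exact values:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newAPIServerCert issues a serving certificate signed by the cluster CA,
    // carrying the same IP SANs that appear in the log above.
    func newAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"}, // assumed subject
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return der, key, err
    }

    func main() {
        // Throwaway CA so the sketch runs standalone; minikube loads the real
        // CA key and cert that are copied to /var/lib/minikube/certs below.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        der, _, err := newAPIServerCert(caCert, caKey)
        fmt.Println("issued apiserver cert,", len(der), "DER bytes, err =", err)
    }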
	I0708 13:04:58.265020    4087 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/proxy-client.key
	I0708 13:04:58.265162    4087 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem (1338 bytes)
	W0708 13:04:58.265299    4087 certs.go:480] ignoring /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767_empty.pem, impossibly tiny 0 bytes
	I0708 13:04:58.265307    4087 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 13:04:58.265336    4087 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem (1078 bytes)
	I0708 13:04:58.265360    4087 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem (1123 bytes)
	I0708 13:04:58.265388    4087 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem (1675 bytes)
	I0708 13:04:58.265441    4087 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem (1708 bytes)
	I0708 13:04:58.265808    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 13:04:58.273006    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 13:04:58.279361    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 13:04:58.285674    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 13:04:58.292645    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0708 13:04:58.298830    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 13:04:58.305390    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 13:04:58.312672    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 13:04:58.319005    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 13:04:58.325654    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem --> /usr/share/ca-certificates/1767.pem (1338 bytes)
	I0708 13:04:58.332850    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /usr/share/ca-certificates/17672.pem (1708 bytes)
	I0708 13:04:58.339581    4087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 13:04:58.344396    4087 ssh_runner.go:195] Run: openssl version
	I0708 13:04:58.346194    4087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 13:04:58.349540    4087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 13:04:58.351186    4087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 13:04:58.351207    4087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 13:04:58.352987    4087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 13:04:58.356081    4087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1767.pem && ln -fs /usr/share/ca-certificates/1767.pem /etc/ssl/certs/1767.pem"
	I0708 13:04:58.358793    4087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1767.pem
	I0708 13:04:58.360146    4087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:34 /usr/share/ca-certificates/1767.pem
	I0708 13:04:58.360165    4087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1767.pem
	I0708 13:04:58.361908    4087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1767.pem /etc/ssl/certs/51391683.0"
	I0708 13:04:58.365215    4087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17672.pem && ln -fs /usr/share/ca-certificates/17672.pem /etc/ssl/certs/17672.pem"
	I0708 13:04:58.368442    4087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17672.pem
	I0708 13:04:58.369765    4087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:34 /usr/share/ca-certificates/17672.pem
	I0708 13:04:58.369783    4087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17672.pem
	I0708 13:04:58.371530    4087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17672.pem /etc/ssl/certs/3ec20f2e.0"
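The test/ln pairs above install each PEM under /usr/share/ca-certificates and add the <subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0) in /etc/ssl/certs that OpenSSL's directory lookup expects. A small Go sketch that reproduces the symlink step by shelling out to openssl x509 -hash, run against an assumed PEM path:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // hashLink creates /etc/ssl/certs/<subject-hash>.0 pointing at the given PEM,
    // which is what the ln -fs commands in the log above achieve.
    func hashLink(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        os.Remove(link) // -f behaviour: replace an existing link
        return os.Symlink(pem, link)
    }

    func main() {
        fmt.Println(hashLink("/usr/share/ca-certificates/minikubeCA.pem"))
    }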
	I0708 13:04:58.374449    4087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 13:04:58.375978    4087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 13:04:58.378195    4087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 13:04:58.380215    4087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 13:04:58.382223    4087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 13:04:58.384018    4087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 13:04:58.385760    4087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0708 13:04:58.387609    4087 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50600 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0708 13:04:58.387679    4087 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0708 13:04:58.397508    4087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 13:04:58.400573    4087 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 13:04:58.400580    4087 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 13:04:58.400582    4087 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 13:04:58.400604    4087 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 13:04:58.403270    4087 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 13:04:58.403560    4087 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-170000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:04:58.403653    4087 kubeconfig.go:62] /Users/jenkins/minikube-integration/19195-1270/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-170000" cluster setting kubeconfig missing "stopped-upgrade-170000" context setting]
	I0708 13:04:58.403845    4087 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:04:58.404317    4087 kapi.go:59] client config for stopped-upgrade-170000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10599f4f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 13:04:58.404754    4087 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 13:04:58.407303    4087 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-170000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0708 13:04:58.407307    4087 kubeadm.go:1154] stopping kube-system containers ...
	I0708 13:04:58.407342    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0708 13:04:58.418185    4087 docker.go:483] Stopping containers: [d192ae42697c 9693310828d2 fb1259fd60c1 7420b58631a6 aa9fa9821d3c 9744dceee4c2 367cf0bc5844 440f0ce24e45]
	I0708 13:04:58.418248    4087 ssh_runner.go:195] Run: docker stop d192ae42697c 9693310828d2 fb1259fd60c1 7420b58631a6 aa9fa9821d3c 9744dceee4c2 367cf0bc5844 440f0ce24e45
	I0708 13:04:58.428882    4087 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 13:04:58.434143    4087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 13:04:58.437454    4087 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 13:04:58.437465    4087 kubeadm.go:156] found existing configuration files:
	
	I0708 13:04:58.437487    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/admin.conf
	I0708 13:04:58.440132    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 13:04:58.440152    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 13:04:58.442700    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/kubelet.conf
	I0708 13:04:58.445637    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 13:04:58.445660    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 13:04:58.448564    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/controller-manager.conf
	I0708 13:04:58.451276    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 13:04:58.451296    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 13:04:58.454119    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/scheduler.conf
	I0708 13:04:58.457118    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 13:04:58.457142    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
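Since the guest has none of the four kubeconfig-style files, each grep for https://control-plane.minikube.internal:50600 exits with status 2 and the corresponding rm is issued anyway, leaving a clean slate for kubeadm. The same check-then-remove loop, sketched in Go with the file list and endpoint taken from the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50600"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: remove so kubeadm regenerates it.
                os.Remove(f)
                fmt.Println("removed stale config:", f)
            }
        }
    }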
	I0708 13:04:58.460026    4087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 13:04:58.462828    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:04:58.485016    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:04:58.856750    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:04:58.992346    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:04:59.015541    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
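Rather than a full kubeadm init, the restart path replays individual init phases against the regenerated kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and local etcd, in that order. A sketch of the same sequence (binary and config paths copied from the log; error handling simplified):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
        config := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append(append([]string{"init", "phase"}, phase...), "--config", config)
            out, err := exec.Command("sudo", append([]string{kubeadm}, args...)...).CombinedOutput()
            if err != nil {
                fmt.Printf("phase %v failed: %v\n%s", phase, err, out)
                return
            }
        }
        fmt.Println("control plane static pods and local etcd regenerated")
    }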
	I0708 13:04:59.042426    4087 api_server.go:52] waiting for apiserver process to appear ...
	I0708 13:04:59.042500    4087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:04:59.544680    4087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:04:56.204971    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:04:56.205099    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:04:56.233457    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:04:56.233538    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:04:56.256556    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:04:56.256633    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:04:56.273121    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:04:56.273198    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:04:56.284308    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:04:56.284385    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:04:56.295643    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:04:56.295724    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:04:56.307385    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:04:56.307455    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:04:56.318089    3932 logs.go:276] 0 containers: []
	W0708 13:04:56.318103    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:04:56.318166    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:04:56.329097    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:04:56.329115    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:04:56.329121    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:04:56.333680    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:04:56.333689    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:04:56.348842    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:04:56.348853    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:04:56.360958    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:04:56.360973    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:04:56.373170    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:04:56.373182    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:04:56.391218    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:04:56.391229    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:04:56.406145    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:04:56.406158    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:04:56.418137    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:04:56.418152    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:04:56.459402    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:04:56.459411    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:04:56.471182    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:04:56.471198    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:04:56.486916    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:04:56.486927    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:04:56.499082    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:04:56.499094    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:04:56.537468    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:04:56.537479    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:04:56.549890    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:04:56.549903    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:04:56.562779    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:04:56.562792    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:04:56.577518    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:04:56.577529    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:04:56.590570    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:04:56.590582    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:04:59.116578    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:00.044540    4087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:05:00.049009    4087 api_server.go:72] duration metric: took 1.006613s to wait for apiserver process to appear ...
	I0708 13:05:00.049022    4087 api_server.go:88] waiting for apiserver healthz status ...
	I0708 13:05:00.049030    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:04.118826    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:04.119271    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:05:04.155016    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:05:04.155163    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:05:04.175422    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:05:04.175528    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:05:04.191223    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:05:04.191304    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:05:04.207910    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:05:04.207986    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:05:04.218415    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:05:04.218488    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:05:04.230798    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:05:04.230871    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:05:04.249860    3932 logs.go:276] 0 containers: []
	W0708 13:05:04.249873    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:05:04.249930    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:05:04.263048    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:05:04.263066    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:05:04.263070    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:05:04.286192    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:05:04.286201    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:05:04.298315    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:05:04.298327    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:05:04.302723    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:05:04.302732    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:05:04.314326    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:05:04.314337    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:05:04.329816    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:05:04.329827    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:05:04.341926    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:05:04.341938    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:05:04.383674    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:05:04.383687    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:05:04.395509    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:05:04.395520    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:05:04.409962    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:05:04.409972    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:05:04.447026    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:05:04.447036    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:05:04.459616    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:05:04.459626    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:05:04.471515    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:05:04.471528    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:05:04.492813    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:05:04.492826    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:05:04.504947    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:05:04.504958    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:05:04.519338    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:05:04.519347    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:05:04.533703    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:05:04.533713    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:05:05.051009    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:05.051049    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:07.050643    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:10.051713    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:10.051760    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:12.052874    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:12.053172    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:05:12.082264    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:05:12.082388    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:05:12.100572    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:05:12.100663    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:05:12.114078    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:05:12.114164    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:05:12.126037    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:05:12.126109    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:05:12.136598    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:05:12.136661    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:05:12.147127    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:05:12.147194    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:05:12.157717    3932 logs.go:276] 0 containers: []
	W0708 13:05:12.157733    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:05:12.157787    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:05:12.168521    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:05:12.168539    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:05:12.168545    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:05:12.186466    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:05:12.186479    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:05:12.193793    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:05:12.193800    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:05:12.212704    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:05:12.212720    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:05:12.234902    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:05:12.234912    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:05:12.254368    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:05:12.254382    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:05:12.268505    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:05:12.268515    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:05:12.280077    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:05:12.280087    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:05:12.295894    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:05:12.295906    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:05:12.307661    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:05:12.307674    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:05:12.322010    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:05:12.322021    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:05:12.333891    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:05:12.333902    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:05:12.346356    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:05:12.346368    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:05:12.358778    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:05:12.358788    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:05:12.370783    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:05:12.370794    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:05:12.385675    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:05:12.385686    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:05:12.431192    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:05:12.431203    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:05:14.968827    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:15.051923    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:15.051960    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:19.971000    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:19.971288    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:05:20.000750    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:05:20.000881    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:05:20.018680    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:05:20.018773    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:05:20.032646    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:05:20.032713    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:05:20.045758    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:05:20.045838    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:05:20.055977    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:05:20.056044    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:05:20.066655    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:05:20.066725    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:05:20.077509    3932 logs.go:276] 0 containers: []
	W0708 13:05:20.077520    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:05:20.077580    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:05:20.088286    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:05:20.088305    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:05:20.088311    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:05:20.104197    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:05:20.104210    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:05:20.116693    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:05:20.116703    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:05:20.130844    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:05:20.130854    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:05:20.142370    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:05:20.142383    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:05:20.153814    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:05:20.153827    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:05:20.165927    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:05:20.165938    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:05:20.177391    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:05:20.177401    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:05:20.189050    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:05:20.189061    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:05:20.211137    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:05:20.211145    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:05:20.233339    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:05:20.233350    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:05:20.274738    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:05:20.274758    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:05:20.279822    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:05:20.279832    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:05:20.291728    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:05:20.291739    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:05:20.311441    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:05:20.311452    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:05:20.381287    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:05:20.381298    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:05:20.399100    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:05:20.399109    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:05:20.052348    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:20.052369    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:22.915470    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:25.053009    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:25.053078    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:27.917666    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:27.917911    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:05:27.940238    3932 logs.go:276] 2 containers: [b73a0038804f 27a315e0e1d2]
	I0708 13:05:27.940349    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:05:27.955776    3932 logs.go:276] 2 containers: [995ff223681d 663e148eab2d]
	I0708 13:05:27.955856    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:05:27.968611    3932 logs.go:276] 1 containers: [632152eccf25]
	I0708 13:05:27.968673    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:05:27.979635    3932 logs.go:276] 2 containers: [caa2559e6578 572a7b23b33d]
	I0708 13:05:27.979707    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:05:27.990389    3932 logs.go:276] 1 containers: [7fc889e2cef6]
	I0708 13:05:27.990456    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:05:28.000583    3932 logs.go:276] 2 containers: [364e7abdea37 ab6316c47d83]
	I0708 13:05:28.000650    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:05:28.010797    3932 logs.go:276] 0 containers: []
	W0708 13:05:28.010806    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:05:28.010862    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:05:28.022121    3932 logs.go:276] 2 containers: [aed1a674fd24 374ea76eccc3]
	I0708 13:05:28.022139    3932 logs.go:123] Gathering logs for etcd [663e148eab2d] ...
	I0708 13:05:28.022144    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 663e148eab2d"
	I0708 13:05:28.034841    3932 logs.go:123] Gathering logs for coredns [632152eccf25] ...
	I0708 13:05:28.034852    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 632152eccf25"
	I0708 13:05:28.045769    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:05:28.045779    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:05:28.050024    3932 logs.go:123] Gathering logs for kube-scheduler [caa2559e6578] ...
	I0708 13:05:28.050030    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caa2559e6578"
	I0708 13:05:28.065475    3932 logs.go:123] Gathering logs for kube-controller-manager [ab6316c47d83] ...
	I0708 13:05:28.065485    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab6316c47d83"
	I0708 13:05:28.079037    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:05:28.079048    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:05:28.091373    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:05:28.091385    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:05:28.125723    3932 logs.go:123] Gathering logs for kube-apiserver [27a315e0e1d2] ...
	I0708 13:05:28.125734    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a315e0e1d2"
	I0708 13:05:28.137783    3932 logs.go:123] Gathering logs for kube-proxy [7fc889e2cef6] ...
	I0708 13:05:28.137793    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc889e2cef6"
	I0708 13:05:28.157383    3932 logs.go:123] Gathering logs for kube-controller-manager [364e7abdea37] ...
	I0708 13:05:28.157392    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364e7abdea37"
	I0708 13:05:28.182312    3932 logs.go:123] Gathering logs for storage-provisioner [aed1a674fd24] ...
	I0708 13:05:28.182322    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed1a674fd24"
	I0708 13:05:28.195824    3932 logs.go:123] Gathering logs for storage-provisioner [374ea76eccc3] ...
	I0708 13:05:28.195834    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374ea76eccc3"
	I0708 13:05:28.207224    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:05:28.207236    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:05:28.231726    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:05:28.231734    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:05:28.273981    3932 logs.go:123] Gathering logs for kube-apiserver [b73a0038804f] ...
	I0708 13:05:28.273993    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b73a0038804f"
	I0708 13:05:28.288075    3932 logs.go:123] Gathering logs for etcd [995ff223681d] ...
	I0708 13:05:28.288086    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995ff223681d"
	I0708 13:05:28.309170    3932 logs.go:123] Gathering logs for kube-scheduler [572a7b23b33d] ...
	I0708 13:05:28.309179    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 572a7b23b33d"
	I0708 13:05:30.830625    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:30.054149    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:30.054187    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:35.832818    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:35.832899    3932 kubeadm.go:591] duration metric: took 4m4.849406875s to restartPrimaryControlPlane
	W0708 13:05:35.832951    3932 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 13:05:35.832970    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0708 13:05:36.814195    3932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 13:05:36.819225    3932 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 13:05:36.822147    3932 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 13:05:36.824737    3932 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 13:05:36.824742    3932 kubeadm.go:156] found existing configuration files:
	
	I0708 13:05:36.824762    3932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/admin.conf
	I0708 13:05:36.827540    3932 kubeadm.go:162] "https://control-plane.minikube.internal:50391" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 13:05:36.827566    3932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 13:05:36.830832    3932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/kubelet.conf
	I0708 13:05:36.833502    3932 kubeadm.go:162] "https://control-plane.minikube.internal:50391" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 13:05:36.833528    3932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 13:05:36.836396    3932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/controller-manager.conf
	I0708 13:05:36.839370    3932 kubeadm.go:162] "https://control-plane.minikube.internal:50391" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 13:05:36.839397    3932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 13:05:36.842570    3932 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/scheduler.conf
	I0708 13:05:36.844936    3932 kubeadm.go:162] "https://control-plane.minikube.internal:50391" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50391 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 13:05:36.844958    3932 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 13:05:36.847664    3932 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 13:05:36.864313    3932 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0708 13:05:36.864397    3932 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 13:05:36.914115    3932 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 13:05:36.914187    3932 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 13:05:36.914235    3932 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 13:05:36.963148    3932 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 13:05:36.973270    3932 out.go:204]   - Generating certificates and keys ...
	I0708 13:05:36.973305    3932 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 13:05:36.973336    3932 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 13:05:36.973380    3932 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 13:05:36.973410    3932 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 13:05:36.973442    3932 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 13:05:36.973467    3932 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 13:05:36.973507    3932 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 13:05:36.973545    3932 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 13:05:36.973595    3932 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 13:05:36.974365    3932 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 13:05:36.974400    3932 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 13:05:36.974455    3932 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 13:05:37.229152    3932 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 13:05:37.400058    3932 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 13:05:37.525109    3932 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 13:05:37.878723    3932 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 13:05:37.906056    3932 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 13:05:37.906670    3932 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 13:05:37.906691    3932 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 13:05:37.977527    3932 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 13:05:35.055340    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:35.055389    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:37.980012    3932 out.go:204]   - Booting up control plane ...
	I0708 13:05:37.980085    3932 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 13:05:37.980130    3932 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 13:05:37.980164    3932 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 13:05:37.980220    3932 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 13:05:37.980294    3932 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 13:05:42.481604    3932 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502394 seconds
	I0708 13:05:42.481711    3932 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 13:05:42.485947    3932 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 13:05:42.997248    3932 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 13:05:42.997387    3932 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-129000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 13:05:43.504423    3932 kubeadm.go:309] [bootstrap-token] Using token: hifjt8.wy8jakd0xhx8lfx2
	I0708 13:05:43.510767    3932 out.go:204]   - Configuring RBAC rules ...
	I0708 13:05:43.510852    3932 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 13:05:43.510918    3932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 13:05:43.513087    3932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 13:05:43.514492    3932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 13:05:43.515584    3932 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 13:05:43.516644    3932 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 13:05:43.520428    3932 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 13:05:43.698619    3932 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 13:05:43.909033    3932 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 13:05:43.909285    3932 kubeadm.go:309] 
	I0708 13:05:43.909316    3932 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 13:05:43.909321    3932 kubeadm.go:309] 
	I0708 13:05:43.909360    3932 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 13:05:43.909366    3932 kubeadm.go:309] 
	I0708 13:05:43.909412    3932 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 13:05:43.909444    3932 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 13:05:43.909518    3932 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 13:05:43.909576    3932 kubeadm.go:309] 
	I0708 13:05:43.909626    3932 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 13:05:43.909636    3932 kubeadm.go:309] 
	I0708 13:05:43.909661    3932 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 13:05:43.909671    3932 kubeadm.go:309] 
	I0708 13:05:43.909701    3932 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 13:05:43.909779    3932 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 13:05:43.909863    3932 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 13:05:43.909871    3932 kubeadm.go:309] 
	I0708 13:05:43.909912    3932 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 13:05:43.909949    3932 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 13:05:43.909955    3932 kubeadm.go:309] 
	I0708 13:05:43.909995    3932 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token hifjt8.wy8jakd0xhx8lfx2 \
	I0708 13:05:43.910055    3932 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e \
	I0708 13:05:43.910067    3932 kubeadm.go:309] 	--control-plane 
	I0708 13:05:43.910071    3932 kubeadm.go:309] 
	I0708 13:05:43.910126    3932 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 13:05:43.910132    3932 kubeadm.go:309] 
	I0708 13:05:43.910173    3932 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token hifjt8.wy8jakd0xhx8lfx2 \
	I0708 13:05:43.910230    3932 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e 
	I0708 13:05:43.910305    3932 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 13:05:43.910311    3932 cni.go:84] Creating CNI manager for ""
	I0708 13:05:43.910319    3932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:05:43.914276    3932 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 13:05:43.919212    3932 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 13:05:43.922208    3932 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 13:05:43.927736    3932 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 13:05:43.927792    3932 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 13:05:43.927821    3932 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-129000 minikube.k8s.io/updated_at=2024_07_08T13_05_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=running-upgrade-129000 minikube.k8s.io/primary=true
	I0708 13:05:43.969611    3932 kubeadm.go:1107] duration metric: took 41.865875ms to wait for elevateKubeSystemPrivileges
	I0708 13:05:43.969676    3932 ops.go:34] apiserver oom_adj: -16
	W0708 13:05:43.969801    3932 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 13:05:43.969809    3932 kubeadm.go:393] duration metric: took 4m13.027804917s to StartCluster
	I0708 13:05:43.969818    3932 settings.go:142] acquiring lock: {Name:mka0c397a57d617e1d77508d22cc3adb2edf5927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:05:43.969906    3932 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:05:43.970301    3932 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:05:43.970515    3932 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:05:43.970599    3932 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 13:05:43.970634    3932 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-129000"
	I0708 13:05:43.970648    3932 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-129000"
	W0708 13:05:43.970651    3932 addons.go:243] addon storage-provisioner should already be in state true
	I0708 13:05:43.970656    3932 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-129000"
	I0708 13:05:43.970665    3932 host.go:66] Checking if "running-upgrade-129000" exists ...
	I0708 13:05:43.970673    3932 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-129000"
	I0708 13:05:43.970713    3932 config.go:182] Loaded profile config "running-upgrade-129000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:05:43.971086    3932 retry.go:31] will retry after 1.101944488s: connect: dial unix /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/running-upgrade-129000/monitor: connect: connection refused
	I0708 13:05:43.971803    3932 kapi.go:59] client config for running-upgrade-129000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/running-upgrade-129000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043634f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 13:05:43.972145    3932 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-129000"
	W0708 13:05:43.972150    3932 addons.go:243] addon default-storageclass should already be in state true
	I0708 13:05:43.972158    3932 host.go:66] Checking if "running-upgrade-129000" exists ...
	I0708 13:05:43.972693    3932 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 13:05:43.972698    3932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 13:05:43.972703    3932 sshutil.go:53] new ssh client: &{IP:localhost Port:50359 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/running-upgrade-129000/id_rsa Username:docker}
	I0708 13:05:43.975116    3932 out.go:177] * Verifying Kubernetes components...
	I0708 13:05:40.056950    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:40.056972    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:43.981097    3932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:05:44.069172    3932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 13:05:44.074258    3932 api_server.go:52] waiting for apiserver process to appear ...
	I0708 13:05:44.074302    3932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:05:44.078236    3932 api_server.go:72] duration metric: took 107.713209ms to wait for apiserver process to appear ...
	I0708 13:05:44.078244    3932 api_server.go:88] waiting for apiserver healthz status ...
	I0708 13:05:44.078250    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:44.147717    3932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 13:05:45.080537    3932 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:05:45.084565    3932 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 13:05:45.084581    3932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 13:05:45.084600    3932 sshutil.go:53] new ssh client: &{IP:localhost Port:50359 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/running-upgrade-129000/id_rsa Username:docker}
	I0708 13:05:45.142191    3932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 13:05:45.058805    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:45.058852    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:49.079023    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:49.079074    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:50.061046    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:50.061093    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:54.079435    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:54.079482    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:55.063253    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:55.063293    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:59.079710    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:59.079731    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:00.064411    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:00.064508    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:00.075503    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:00.075581    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:00.086578    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:00.086652    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:00.096540    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:00.096616    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:00.107033    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:00.107099    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:00.117721    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:00.117789    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:00.128010    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:00.128084    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:00.138234    4087 logs.go:276] 0 containers: []
	W0708 13:06:00.138247    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:00.138306    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:00.149516    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:00.149538    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:00.149544    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:00.162169    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:00.162186    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:00.275034    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:00.275047    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:00.286255    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:00.286268    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:00.298674    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:00.298686    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:00.314250    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:00.314263    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:00.331728    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:00.331739    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:00.346437    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:00.346448    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:00.372516    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:00.372524    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:00.384523    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:00.384534    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:00.398221    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:00.398231    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:00.413487    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:00.413501    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:00.425179    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:00.425193    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:00.429786    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:00.429794    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:00.441356    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:00.441368    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:00.481337    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:00.481345    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:00.495675    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:00.495688    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:03.023428    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:04.079813    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:04.079858    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:08.025598    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:08.025754    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:08.037923    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:08.037997    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:08.049326    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:08.049401    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:08.068651    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:08.068726    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:08.079357    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:08.079435    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:08.089582    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:08.089664    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:08.100548    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:08.100615    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:08.110244    4087 logs.go:276] 0 containers: []
	W0708 13:06:08.110256    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:08.110330    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:08.120756    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:08.120774    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:08.120778    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:08.146873    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:08.146887    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:08.161077    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:08.161093    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:08.173478    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:08.173489    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:08.186076    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:08.186089    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:08.197993    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:08.198005    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:08.210685    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:08.210697    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:08.248761    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:08.248770    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:08.263766    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:08.263777    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:08.278794    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:08.278803    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:08.305211    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:08.305219    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:08.345162    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:08.345172    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:08.349646    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:08.349651    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:08.363272    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:08.363284    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:08.378875    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:08.378889    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:08.397074    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:08.397087    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:08.410655    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:08.410666    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:09.080004    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:09.080026    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:14.080268    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:14.080308    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0708 13:06:14.439129    3932 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0708 13:06:14.443583    3932 out.go:177] * Enabled addons: storage-provisioner
	I0708 13:06:10.927885    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:14.449384    3932 addons.go:510] duration metric: took 30.479701917s for enable addons: enabled=[storage-provisioner]
	I0708 13:06:15.928154    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:15.928335    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:15.939755    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:15.939836    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:15.952398    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:15.952482    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:15.963390    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:15.963464    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:15.974166    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:15.974232    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:15.996192    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:15.996267    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:16.006877    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:16.006947    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:16.017140    4087 logs.go:276] 0 containers: []
	W0708 13:06:16.017151    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:16.017204    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:16.032261    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:16.032280    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:16.032285    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:16.045049    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:16.045059    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:16.064933    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:16.064945    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:16.077969    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:16.077982    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:16.096299    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:16.096309    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:16.107592    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:16.107607    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:16.118800    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:16.118812    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:16.143200    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:16.143209    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:16.185259    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:16.185273    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:16.199576    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:16.199589    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:16.211709    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:16.211720    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:16.226289    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:16.226298    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:16.266091    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:16.266100    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:16.283250    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:16.283262    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:16.287794    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:16.287803    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:16.312530    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:16.312544    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:16.328255    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:16.328267    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:18.841900    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:19.080648    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:19.080699    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:23.844131    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:23.844252    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:23.855112    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:23.855185    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:23.865767    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:23.865836    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:23.878101    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:23.878165    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:23.889176    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:23.889244    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:23.900201    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:23.900276    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:23.914316    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:23.914385    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:23.932984    4087 logs.go:276] 0 containers: []
	W0708 13:06:23.932994    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:23.933051    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:23.943736    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:23.943754    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:23.943760    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:23.978976    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:23.978987    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:23.993752    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:23.993763    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:24.007680    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:24.007690    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:24.035406    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:24.035417    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:24.073137    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:24.073145    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:24.098685    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:24.098694    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:24.113409    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:24.113420    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:24.125140    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:24.125156    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:24.137024    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:24.137036    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:24.154885    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:24.154895    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:24.168478    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:24.168488    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:24.184219    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:24.184228    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:24.196038    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:24.196054    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:24.208189    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:24.208200    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:24.212555    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:24.212563    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:24.223916    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:24.223928    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:24.081182    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:24.081202    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:26.751178    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:29.081775    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:29.081825    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:31.753450    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:31.753605    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:31.776249    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:31.776327    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:31.788835    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:31.788907    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:31.804341    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:31.804403    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:31.814911    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:31.814982    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:31.829767    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:31.829836    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:31.840516    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:31.840588    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:31.851942    4087 logs.go:276] 0 containers: []
	W0708 13:06:31.851952    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:31.852008    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:31.862292    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:31.862310    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:31.862315    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:31.866914    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:31.866920    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:31.905542    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:31.905553    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:31.920026    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:31.920035    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:31.932828    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:31.932838    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:31.972360    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:31.972368    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:31.986182    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:31.986192    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:32.003839    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:32.003853    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:32.021720    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:32.021736    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:32.035112    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:32.035125    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:32.049883    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:32.049894    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:32.066702    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:32.066714    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:32.091186    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:32.091194    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:32.115938    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:32.115949    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:32.129823    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:32.129834    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:32.143950    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:32.143961    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:32.154994    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:32.155010    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:34.083006    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:34.083030    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:34.669010    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:39.084507    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:39.084534    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:39.671099    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:39.671199    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:39.683985    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:39.684062    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:39.695542    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:39.695615    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:39.705829    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:39.705906    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:39.716633    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:39.716701    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:39.726743    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:39.726808    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:39.737563    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:39.737628    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:39.747551    4087 logs.go:276] 0 containers: []
	W0708 13:06:39.747564    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:39.747627    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:39.758357    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:39.758382    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:39.758387    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:39.762865    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:39.762871    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:39.788444    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:39.788455    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:39.803845    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:39.803856    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:39.815780    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:39.815792    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:39.826928    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:39.826942    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:39.844704    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:39.844714    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:39.869124    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:39.869133    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:39.882664    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:39.882675    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:39.919675    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:39.919691    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:39.934427    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:39.934436    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:39.946645    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:39.946659    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:39.965201    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:39.965217    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:39.979849    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:39.979860    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:39.991670    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:39.991680    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:40.028384    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:40.028392    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:40.048341    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:40.048351    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:42.563163    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:44.086090    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:44.086200    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:44.110553    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:06:44.110630    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:44.122197    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:06:44.122268    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:44.132661    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:06:44.132732    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:44.143490    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:06:44.143561    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:44.153782    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:06:44.153848    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:44.170011    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:06:44.170081    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:44.180495    3932 logs.go:276] 0 containers: []
	W0708 13:06:44.180507    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:44.180567    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:44.190520    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:06:44.190541    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:44.190547    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:44.227879    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:44.227886    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:44.269791    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:06:44.269802    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:06:44.284383    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:06:44.284396    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:06:44.305134    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:06:44.305145    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:06:44.317711    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:06:44.317721    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:06:44.335036    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:44.335046    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:44.360319    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:44.360329    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:44.365120    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:06:44.365129    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:06:44.379102    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:06:44.379111    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:06:44.390594    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:06:44.390605    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:06:44.402316    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:06:44.402329    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:06:44.413820    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:06:44.413834    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:47.565317    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:47.565495    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:47.583176    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:47.583274    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:47.597135    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:47.597207    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:47.608891    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:47.608964    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:47.626018    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:47.626086    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:47.636601    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:47.636675    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:47.647274    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:47.647343    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:47.660802    4087 logs.go:276] 0 containers: []
	W0708 13:06:47.660815    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:47.660871    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:47.671376    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:47.671398    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:47.671404    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:47.675590    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:47.675597    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:47.686782    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:47.686794    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:47.724322    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:47.724333    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:47.741665    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:47.741675    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:47.756641    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:47.756653    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:47.768093    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:47.768103    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:47.781860    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:47.781872    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:47.795894    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:47.795904    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:47.821156    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:47.821168    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:47.838145    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:47.838156    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:47.852453    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:47.852469    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:47.868106    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:47.868120    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:47.884935    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:47.884945    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:47.923461    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:47.923472    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:47.934660    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:47.934671    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:47.946755    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:47.946766    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:46.925993    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:50.472552    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:51.928159    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:51.928313    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:51.952164    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:06:51.952258    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:51.964713    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:06:51.964785    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:51.976309    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:06:51.976386    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:51.986822    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:06:51.986898    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:51.997465    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:06:51.997529    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:52.007628    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:06:52.007698    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:52.017614    3932 logs.go:276] 0 containers: []
	W0708 13:06:52.017628    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:52.017688    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:52.028410    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:06:52.028424    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:06:52.028430    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:06:52.040553    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:52.040564    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:52.065190    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:52.065197    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:52.104072    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:52.104079    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:52.139769    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:06:52.139780    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:06:52.155380    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:06:52.155391    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:06:52.170259    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:06:52.170270    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:06:52.188861    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:06:52.188871    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:06:52.201175    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:06:52.201186    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:52.213553    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:52.213567    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:52.217876    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:06:52.217882    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:06:52.231534    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:06:52.231544    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:06:52.245131    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:06:52.245142    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:06:54.758411    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:55.474746    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:55.474933    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:55.494049    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:55.494141    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:55.509137    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:55.509215    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:55.521768    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:55.521833    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:55.532671    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:55.532734    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:55.543457    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:55.543529    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:55.554143    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:55.554206    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:55.565152    4087 logs.go:276] 0 containers: []
	W0708 13:06:55.565167    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:55.565225    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:55.576207    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:55.576228    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:55.576233    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:55.605649    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:55.605662    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:55.617133    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:55.617146    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:55.621544    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:55.621553    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:55.659230    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:55.659242    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:55.675280    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:55.675291    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:55.687066    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:55.687078    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:55.700702    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:55.700713    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:55.715183    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:55.715194    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:55.727037    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:55.727047    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:55.741655    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:55.741667    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:55.762767    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:55.762777    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:55.787175    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:55.787185    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:55.827246    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:55.827259    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:55.842454    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:55.842465    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:55.854225    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:55.854237    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:55.872315    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:55.872327    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:58.385713    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:59.760592    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:59.760808    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:59.785316    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:06:59.785421    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:59.804756    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:06:59.804842    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:59.817206    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:06:59.817286    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:59.828290    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:06:59.828360    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:59.838435    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:06:59.838504    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:59.849571    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:06:59.849629    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:59.859695    3932 logs.go:276] 0 containers: []
	W0708 13:06:59.859707    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:59.859769    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:59.870296    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:06:59.870313    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:06:59.870319    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:06:59.887955    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:59.887969    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:59.911789    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:06:59.911796    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:59.923284    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:59.923294    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:59.927850    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:06:59.927858    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:06:59.943425    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:06:59.943437    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:06:59.957648    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:06:59.957662    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:06:59.971429    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:06:59.971443    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:06:59.983396    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:06:59.983409    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:06:59.995055    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:06:59.995070    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:00.006631    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:00.006642    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:00.043681    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:00.043690    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:00.078777    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:00.078791    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:03.387829    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:03.387989    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:03.399968    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:03.400052    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:03.410697    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:03.410769    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:03.421246    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:03.421313    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:03.431794    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:03.431872    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:03.442459    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:03.442519    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:03.452822    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:03.452888    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:03.462633    4087 logs.go:276] 0 containers: []
	W0708 13:07:03.462646    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:03.462713    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:03.473344    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:03.473362    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:03.473367    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:03.485471    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:03.485482    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:03.490038    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:03.490044    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:03.501304    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:03.501317    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:03.515449    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:03.515460    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:03.527349    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:03.527359    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:03.565540    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:03.565551    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:03.590748    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:03.590756    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:03.630232    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:03.630244    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:03.672838    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:03.672848    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:03.697425    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:03.697440    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:03.709294    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:03.709305    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:03.723261    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:03.723271    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:03.735069    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:03.735080    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:03.750096    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:03.750107    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:03.764773    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:03.764782    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:03.781684    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:03.781695    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:02.594721    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:06.295318    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:07.595815    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:07.596112    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:07.629970    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:07.630100    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:07.650693    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:07.650792    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:07.664400    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:07.664478    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:07.676350    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:07.676423    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:07.691586    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:07.691658    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:07.702689    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:07.702761    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:07.713371    3932 logs.go:276] 0 containers: []
	W0708 13:07:07.713384    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:07.713440    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:07.723777    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:07.723790    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:07.723795    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:07.735727    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:07.735739    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:07.750319    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:07.750339    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:07.789867    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:07.789877    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:07.794506    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:07.794513    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:07.830535    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:07.830546    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:07.850566    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:07.850576    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:07.862449    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:07.862463    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:07.885737    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:07.885748    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:07.898633    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:07.898644    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:07.912683    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:07.912693    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:07.924523    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:07.924533    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:07.939207    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:07.939221    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:10.458654    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:11.297168    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:11.297399    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:11.311352    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:11.311430    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:11.323177    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:11.323246    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:11.333564    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:11.333634    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:11.347907    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:11.347979    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:11.358261    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:11.358333    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:11.368483    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:11.368557    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:11.378990    4087 logs.go:276] 0 containers: []
	W0708 13:07:11.379000    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:11.379058    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:11.390803    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:11.390821    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:11.390826    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:11.415215    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:11.415227    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:11.429511    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:11.429521    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:11.440898    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:11.440911    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:11.454759    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:11.454773    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:11.469455    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:11.469465    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:11.480830    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:11.480844    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:11.504506    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:11.504514    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:11.518265    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:11.518279    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:11.556985    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:11.557001    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:11.561532    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:11.561539    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:11.597348    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:11.597362    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:11.611655    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:11.611665    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:11.623361    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:11.623373    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:11.635658    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:11.635667    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:11.653538    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:11.653549    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:11.664617    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:11.664628    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:14.178482    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:15.459183    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:15.459344    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:15.477495    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:15.477582    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:15.489167    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:15.489232    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:15.499416    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:15.499480    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:15.509734    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:15.509805    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:15.523893    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:15.523966    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:15.534309    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:15.534375    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:15.547257    3932 logs.go:276] 0 containers: []
	W0708 13:07:15.547269    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:15.547325    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:15.557602    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:15.557618    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:15.557624    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:15.571534    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:15.571547    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:15.584039    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:15.584052    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:15.598469    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:15.598480    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:15.609956    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:15.609965    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:15.624241    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:15.624251    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:15.659023    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:15.659036    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:15.664369    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:15.664378    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:15.680563    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:15.680574    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:15.692088    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:15.692102    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:15.704117    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:15.704126    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:15.722135    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:15.722147    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:15.746775    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:15.746783    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:19.180738    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:19.180874    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:19.200714    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:19.200805    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:19.214815    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:19.214892    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:19.227690    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:19.227750    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:19.243583    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:19.243652    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:19.253907    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:19.253976    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:19.264190    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:19.264258    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:19.274662    4087 logs.go:276] 0 containers: []
	W0708 13:07:19.274678    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:19.274732    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:19.284934    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:19.284955    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:19.284961    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:19.320160    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:19.320170    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:19.335308    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:19.335318    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:19.347036    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:19.347047    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:19.361381    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:19.361393    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:19.372996    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:19.373005    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:19.388633    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:19.388646    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:19.428933    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:19.428941    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:19.440468    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:19.440481    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:19.455256    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:19.455268    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:19.479646    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:19.479654    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:19.493382    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:19.493395    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:19.525414    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:19.525426    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:19.539637    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:19.539651    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:19.550604    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:19.550616    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:19.568420    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:19.568430    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:19.579609    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:19.579620    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:18.287431    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:22.085522    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:23.289701    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:23.289967    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:23.307238    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:23.307333    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:23.320788    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:23.320942    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:23.333957    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:23.334020    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:23.344542    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:23.344614    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:23.355654    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:23.355722    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:23.366108    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:23.366177    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:23.375777    3932 logs.go:276] 0 containers: []
	W0708 13:07:23.375788    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:23.375842    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:23.390221    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:23.390238    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:23.390243    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:23.405009    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:23.405023    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:23.417146    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:23.417158    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:23.428714    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:23.428725    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:23.443769    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:23.443783    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:23.455873    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:23.455884    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:23.473825    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:23.473836    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:23.478611    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:23.478619    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:23.517862    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:23.517873    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:23.529864    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:23.529874    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:23.541127    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:23.541137    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:23.564247    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:23.564255    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:23.601206    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:23.601212    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:27.087889    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:27.088154    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:27.119995    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:27.120128    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:27.137358    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:27.137457    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:27.150201    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:27.150276    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:27.163059    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:27.163132    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:27.173753    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:27.173822    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:27.184660    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:27.184722    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:27.194846    4087 logs.go:276] 0 containers: []
	W0708 13:07:27.194857    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:27.194914    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:27.205592    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:27.205612    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:27.205618    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:27.241211    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:27.241222    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:27.260019    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:27.260030    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:27.272988    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:27.273000    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:27.287942    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:27.287953    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:27.299626    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:27.299636    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:27.323813    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:27.323821    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:27.328154    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:27.328159    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:27.339982    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:27.339992    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:27.351992    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:27.352002    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:27.371713    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:27.371723    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:27.384485    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:27.384497    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:27.424162    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:27.424173    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:27.441099    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:27.441109    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:27.454788    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:27.454799    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:27.479856    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:27.479867    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:27.494253    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:27.494262    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:26.117126    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:30.007223    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:31.119335    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:31.119555    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:31.147378    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:31.147497    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:31.163419    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:31.163494    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:31.180280    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:31.180349    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:31.191796    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:31.191860    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:31.202459    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:31.202533    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:31.213263    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:31.213332    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:31.223985    3932 logs.go:276] 0 containers: []
	W0708 13:07:31.223997    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:31.224054    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:31.238856    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:31.238870    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:31.238876    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:31.243834    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:31.243842    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:31.281046    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:31.281057    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:31.296304    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:31.296315    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:31.312128    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:31.312141    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:31.324272    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:31.324283    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:31.343647    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:31.343659    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:31.369029    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:31.369040    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:31.409812    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:31.409826    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:31.424932    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:31.424943    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:31.437377    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:31.437388    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:31.449846    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:31.449858    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:31.461916    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:31.461927    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:33.976400    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:35.009449    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:35.009590    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:35.024265    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:35.024341    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:35.037052    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:35.037130    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:35.047530    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:35.047600    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:35.057867    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:35.057943    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:35.067768    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:35.067833    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:35.078275    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:35.078341    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:35.088380    4087 logs.go:276] 0 containers: []
	W0708 13:07:35.088394    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:35.088445    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:35.098977    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:35.098997    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:35.099003    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:35.138353    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:35.138362    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:35.151703    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:35.151717    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:35.166078    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:35.166090    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:35.190121    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:35.190128    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:35.193854    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:35.193862    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:35.209024    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:35.209035    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:35.221340    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:35.221353    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:35.235923    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:35.235936    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:35.251963    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:35.251973    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:35.286007    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:35.286019    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:35.304017    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:35.304026    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:35.321979    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:35.321990    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:35.352075    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:35.352085    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:35.362990    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:35.363002    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:35.374516    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:35.374526    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:35.394067    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:35.394079    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:37.907044    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:38.976245    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:38.976362    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:38.988731    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:38.988810    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:38.999248    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:38.999316    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:39.009673    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:39.009746    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:39.020203    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:39.020267    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:39.030618    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:39.030693    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:39.049270    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:39.049342    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:39.064700    3932 logs.go:276] 0 containers: []
	W0708 13:07:39.064711    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:39.064771    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:39.075591    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:39.075604    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:39.075609    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:39.091726    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:39.091736    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:39.103219    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:39.103230    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:39.126108    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:39.126115    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:39.161317    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:39.161331    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:39.166266    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:39.166274    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:39.180687    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:39.180697    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:39.194413    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:39.194424    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:39.205675    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:39.205687    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:39.220922    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:39.220934    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:39.235801    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:39.235810    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:39.253064    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:39.253072    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:39.292545    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:39.292553    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:42.904903    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:42.905085    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:42.923218    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:42.923313    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:42.943983    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:42.944058    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:42.955311    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:42.955385    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:42.965776    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:42.965845    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:42.975894    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:42.975965    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:42.986143    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:42.986215    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:42.995961    4087 logs.go:276] 0 containers: []
	W0708 13:07:42.995976    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:42.996036    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:43.006217    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:43.006232    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:43.006238    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:43.018060    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:43.018074    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:43.031779    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:43.031791    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:43.045645    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:43.045659    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:43.056373    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:43.056388    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:43.074696    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:43.074706    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:43.092373    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:43.092387    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:43.103443    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:43.103454    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:43.142916    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:43.142926    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:43.157593    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:43.157606    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:43.168721    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:43.168733    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:43.182540    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:43.182550    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:43.218065    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:43.218079    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:43.243322    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:43.243337    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:43.256870    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:43.256879    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:43.276565    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:43.276576    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:43.299830    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:43.299841    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:41.804593    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:45.804229    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:46.802788    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:46.802985    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:46.818036    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:46.818122    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:46.830678    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:46.830752    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:46.841287    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:46.841361    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:46.851841    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:46.851913    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:46.862553    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:46.862632    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:46.874923    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:46.874997    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:46.892275    3932 logs.go:276] 0 containers: []
	W0708 13:07:46.892293    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:46.892354    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:46.903247    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:46.903262    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:46.903268    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:46.928518    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:46.928530    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:46.940203    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:46.940217    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:46.954723    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:46.954735    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:46.968196    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:46.968208    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:46.980322    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:46.980332    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:46.994407    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:46.994416    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:47.008728    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:47.008738    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:47.020283    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:47.020294    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:47.041826    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:47.041839    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:47.053504    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:47.053513    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:47.090532    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:47.090539    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:47.094658    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:47.094665    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:49.629913    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:50.803752    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:50.803955    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:50.816668    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:50.816752    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:50.827665    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:50.827745    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:50.838725    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:50.838793    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:50.853724    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:50.853789    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:50.870878    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:50.870942    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:50.881233    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:50.881305    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:50.891547    4087 logs.go:276] 0 containers: []
	W0708 13:07:50.891560    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:50.891609    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:50.903291    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:50.903309    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:50.903315    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:50.941578    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:50.941593    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:50.945850    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:50.945857    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:50.957031    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:50.957047    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:50.981179    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:50.981188    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:51.019373    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:51.019388    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:51.044313    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:51.044324    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:51.057751    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:51.057768    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:51.069813    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:51.069825    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:51.084343    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:51.084354    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:51.096164    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:51.096174    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:51.109925    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:51.109935    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:51.126907    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:51.126923    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:51.146097    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:51.146112    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:51.158533    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:51.158544    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:51.170792    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:51.170804    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:51.185270    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:51.185280    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:53.697968    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:54.630077    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:54.630280    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:54.656202    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:07:54.656337    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:54.682311    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:07:54.682398    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:54.694517    3932 logs.go:276] 2 containers: [f585feadba35 12a2164c7181]
	I0708 13:07:54.694588    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:54.705423    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:07:54.705491    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:54.716061    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:07:54.716124    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:54.726258    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:07:54.726314    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:54.736072    3932 logs.go:276] 0 containers: []
	W0708 13:07:54.736084    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:54.736130    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:54.746708    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:07:54.746722    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:07:54.746728    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:07:54.758063    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:07:54.758076    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:07:54.775792    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:07:54.775803    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:07:54.791832    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:54.791841    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:54.816752    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:54.816760    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:54.856310    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:54.856322    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:54.895487    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:07:54.895498    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:07:54.912122    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:07:54.912135    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:07:54.927011    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:07:54.927021    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:54.938317    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:54.938328    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:54.942655    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:07:54.942664    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:07:54.956954    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:07:54.956964    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:07:54.969134    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:07:54.969146    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:07:58.698517    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:58.698607    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:58.709878    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:58.709953    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:58.720649    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:58.720722    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:58.732014    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:58.732082    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:58.742782    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:58.742858    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:58.753603    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:58.753676    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:58.773005    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:58.773075    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:58.783035    4087 logs.go:276] 0 containers: []
	W0708 13:07:58.783046    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:58.783106    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:58.793449    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:58.793468    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:58.793474    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:58.818082    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:58.818092    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:58.832850    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:58.832861    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:58.846545    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:58.846554    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:58.860868    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:58.860878    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:58.873084    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:58.873094    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:58.895982    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:58.895992    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:58.915091    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:58.915102    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:58.928282    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:58.928294    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:58.967681    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:58.967690    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:59.002564    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:59.002575    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:59.016526    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:59.016537    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:59.028798    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:59.028809    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:59.044052    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:59.044062    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:59.061043    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:59.061054    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:59.072593    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:59.072603    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:59.076778    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:59.076784    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:57.481960    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:01.590007    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:02.483102    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:02.483522    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:02.521816    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:02.521994    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:02.546323    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:02.546414    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:02.561512    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:02.561600    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:02.574101    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:02.574174    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:02.584505    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:02.584578    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:02.595622    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:02.595692    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:02.607501    3932 logs.go:276] 0 containers: []
	W0708 13:08:02.607520    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:02.607582    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:02.621754    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:02.621775    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:02.621780    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:02.638067    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:02.638079    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:02.650268    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:02.650278    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:02.665121    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:02.665131    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:02.705867    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:02.705879    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:02.742278    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:02.742292    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:02.753872    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:02.753882    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:02.765675    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:02.765686    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:02.770431    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:02.770439    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:02.784865    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:02.784876    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:02.796186    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:02.796198    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:02.813921    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:02.813931    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:02.838443    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:02.838454    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:02.849857    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:02.849870    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:02.863811    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:02.863823    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:05.377807    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:06.591202    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:06.591420    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:06.610509    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:06.610608    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:06.628066    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:06.628136    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:06.639716    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:06.639782    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:06.650625    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:06.650700    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:06.665806    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:06.665882    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:06.681945    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:06.682016    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:06.692267    4087 logs.go:276] 0 containers: []
	W0708 13:08:06.692278    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:06.692333    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:06.707193    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:06.707210    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:06.707215    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:06.721059    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:06.721070    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:06.742425    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:06.742436    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:06.754037    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:06.754048    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:06.786887    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:06.786897    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:06.798363    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:06.798373    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:06.809991    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:06.810001    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:06.823075    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:06.823086    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:06.863153    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:06.863173    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:06.867642    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:06.867649    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:06.902713    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:06.902724    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:06.917094    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:06.917104    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:06.928787    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:06.928797    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:06.951819    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:06.951827    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:06.965846    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:06.965857    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:06.977538    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:06.977550    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:06.994216    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:06.994227    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:09.509239    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:10.378264    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:10.378464    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:10.402838    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:10.402957    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:10.422438    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:10.422531    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:10.434683    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:10.434752    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:10.445438    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:10.445510    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:10.455982    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:10.456045    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:10.466556    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:10.466619    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:10.476764    3932 logs.go:276] 0 containers: []
	W0708 13:08:10.476774    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:10.476830    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:10.487459    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:10.487477    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:10.487482    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:10.499797    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:10.499809    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:10.511523    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:10.511534    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:10.526362    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:10.526374    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:10.539410    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:10.539421    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:10.544151    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:10.544160    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:10.558425    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:10.558435    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:10.571631    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:10.571642    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:10.585447    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:10.585457    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:10.607732    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:10.607745    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:10.635371    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:10.635397    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:10.649670    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:10.649686    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:10.689853    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:10.689865    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:10.729270    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:10.729283    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:10.766803    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:10.766815    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:14.510851    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:14.511052    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:14.532505    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:14.532598    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:14.547984    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:14.548048    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:14.560787    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:14.560848    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:14.572978    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:14.573064    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:14.586007    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:14.586080    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:14.596801    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:14.596865    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:14.607157    4087 logs.go:276] 0 containers: []
	W0708 13:08:14.607170    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:14.607229    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:13.284937    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:14.618082    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:14.618105    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:14.618111    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:14.632018    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:14.632030    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:14.661578    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:14.661589    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:14.678683    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:14.678697    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:14.690405    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:14.690419    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:14.704950    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:14.704964    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:14.718676    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:14.718687    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:14.742315    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:14.742322    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:14.753885    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:14.753899    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:14.791967    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:14.791976    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:14.796440    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:14.796448    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:14.808010    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:14.808022    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:14.825688    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:14.825698    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:14.837403    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:14.837415    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:14.848443    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:14.848458    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:14.885487    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:14.885497    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:14.900041    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:14.900051    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:17.414861    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:18.286813    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:18.287232    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:18.318306    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:18.318434    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:18.336831    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:18.336927    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:18.351470    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:18.351543    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:18.363073    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:18.363130    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:18.374023    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:18.374090    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:18.385163    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:18.385223    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:18.396281    3932 logs.go:276] 0 containers: []
	W0708 13:08:18.396292    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:18.396349    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:18.407377    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:18.407395    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:18.407401    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:18.422224    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:18.422234    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:18.437311    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:18.437321    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:18.474008    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:18.474018    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:18.486369    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:18.486382    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:18.497414    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:18.497424    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:18.508856    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:18.508867    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:18.520238    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:18.520248    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:18.545582    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:18.545591    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:18.584562    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:18.584570    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:18.589067    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:18.589073    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:18.602968    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:18.602981    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:18.614936    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:18.614947    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:18.626388    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:18.626398    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:18.638110    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:18.638119    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:22.416871    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:22.417204    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:22.451379    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:22.451508    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:22.469627    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:22.469714    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:22.484003    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:22.484083    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:22.496368    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:22.496433    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:22.506790    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:22.506863    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:22.517415    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:22.517494    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:22.530281    4087 logs.go:276] 0 containers: []
	W0708 13:08:22.530292    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:22.530350    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:22.541339    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:22.541357    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:22.541365    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:22.553128    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:22.553140    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:22.568124    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:22.568137    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:22.579728    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:22.579739    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:22.584092    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:22.584101    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:22.595777    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:22.595792    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:22.610324    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:22.610336    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:22.635594    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:22.635604    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:22.650655    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:22.650669    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:22.665784    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:22.665800    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:22.685201    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:22.685212    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:22.704466    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:22.704479    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:22.745921    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:22.745941    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:22.760701    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:22.760716    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:22.780514    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:22.780526    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:22.816432    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:22.816443    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:22.841800    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:22.841810    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:21.157132    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:25.362927    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:26.159238    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:26.159724    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:26.200094    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:26.200238    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:26.225043    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:26.225136    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:26.244121    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:26.244199    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:26.256975    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:26.257049    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:26.268158    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:26.268227    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:26.280044    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:26.280120    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:26.291275    3932 logs.go:276] 0 containers: []
	W0708 13:08:26.291288    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:26.291353    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:26.303080    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:26.303099    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:26.303105    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:26.339199    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:26.339212    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:26.351984    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:26.351995    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:26.364008    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:26.364023    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:26.387448    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:26.387455    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:26.424952    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:26.424959    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:26.439961    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:26.439971    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:26.452239    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:26.452249    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:26.456683    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:26.456692    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:26.468452    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:26.468465    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:26.480945    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:26.480955    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:26.495990    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:26.496000    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:26.516055    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:26.516064    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:26.530530    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:26.530539    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:26.547338    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:26.547350    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:29.059225    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:30.364909    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:30.365113    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:30.379857    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:30.379941    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:30.391545    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:30.391621    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:30.401611    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:30.401683    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:30.412206    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:30.412280    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:30.422292    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:30.422368    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:30.432819    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:30.432891    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:30.442471    4087 logs.go:276] 0 containers: []
	W0708 13:08:30.442483    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:30.442539    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:30.460625    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:30.460652    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:30.460658    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:30.474848    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:30.474858    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:30.486130    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:30.486145    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:30.509034    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:30.509048    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:30.520617    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:30.520628    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:30.534410    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:30.534422    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:30.559865    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:30.559876    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:30.577397    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:30.577412    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:30.591913    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:30.591922    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:30.603205    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:30.603217    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:30.615930    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:30.615942    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:30.627402    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:30.627412    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:30.665381    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:30.665390    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:30.669483    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:30.669491    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:30.704007    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:30.704019    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:30.718451    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:30.718462    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:30.732974    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:30.732987    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:33.258544    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:34.059225    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:34.059372    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:34.079118    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:34.079213    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:34.097539    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:34.097611    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:34.109240    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:34.109315    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:34.119703    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:34.119773    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:34.130000    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:34.130066    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:34.140464    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:34.140530    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:34.150432    3932 logs.go:276] 0 containers: []
	W0708 13:08:34.150443    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:34.150499    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:34.165448    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:34.165465    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:34.165471    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:34.201958    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:34.201970    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:34.214467    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:34.214480    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:34.238664    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:34.238674    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:34.250442    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:34.250453    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:34.265507    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:34.265517    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:34.280115    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:34.280125    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:34.293695    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:34.293704    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:34.305139    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:34.305149    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:34.343754    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:34.343764    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:34.348061    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:34.348069    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:34.359782    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:34.359793    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:34.371431    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:34.371441    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:34.390159    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:34.390168    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:34.401539    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:34.401548    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:38.260650    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:38.260819    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:38.273393    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:38.273474    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:38.288435    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:38.288506    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:38.299196    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:38.299262    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:38.309978    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:38.310051    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:38.320875    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:38.320946    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:38.331696    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:38.331766    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:38.342044    4087 logs.go:276] 0 containers: []
	W0708 13:08:38.342055    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:38.342111    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:38.352992    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:38.353011    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:38.353017    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:38.368086    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:38.368096    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:38.379747    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:38.379756    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:38.392309    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:38.392320    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:38.406052    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:38.406064    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:38.441776    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:38.441788    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:38.465993    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:38.466003    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:38.481851    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:38.481863    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:38.494329    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:38.494339    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:38.511319    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:38.511329    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:38.522603    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:38.522613    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:38.544740    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:38.544748    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:38.556423    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:38.556434    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:38.595042    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:38.595053    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:38.599054    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:38.599062    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:38.613054    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:38.613066    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:38.625212    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:38.625224    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:36.925325    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:41.144195    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:41.927344    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:41.927441    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:41.939042    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:41.939110    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:41.949629    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:41.949687    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:41.960632    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:41.960702    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:41.971070    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:41.971131    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:41.981520    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:41.981590    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:41.995857    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:41.995920    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:42.006295    3932 logs.go:276] 0 containers: []
	W0708 13:08:42.006306    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:42.006366    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:42.016846    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:42.016861    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:42.016866    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:42.051250    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:42.051261    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:42.070173    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:42.070183    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:42.095278    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:42.095286    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:42.106778    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:42.106787    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:42.146311    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:42.146321    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:42.150741    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:42.150750    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:42.162743    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:42.162756    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:42.180662    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:42.180672    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:42.191730    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:42.191741    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:42.207451    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:42.207460    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:42.219107    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:42.219118    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:42.239211    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:42.239221    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:42.259269    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:42.259277    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:42.271662    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:42.271674    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:44.785537    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:46.146377    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:46.146815    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:46.181474    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:46.181604    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:46.201288    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:46.201386    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:46.220289    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:46.220367    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:46.238154    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:46.238228    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:46.248780    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:46.248855    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:46.260639    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:46.260714    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:46.273494    4087 logs.go:276] 0 containers: []
	W0708 13:08:46.273506    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:46.273570    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:46.284953    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:46.284973    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:46.284979    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:46.298956    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:46.298968    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:46.311177    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:46.311190    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:46.327754    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:46.327765    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:46.361608    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:46.361620    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:46.375849    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:46.375859    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:46.407509    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:46.407521    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:46.427403    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:46.427413    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:46.444987    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:46.444997    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:46.468867    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:46.468875    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:46.480792    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:46.480804    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:46.484833    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:46.484838    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:46.504085    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:46.504098    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:46.520069    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:46.520080    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:46.535079    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:46.535090    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:46.547409    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:46.547420    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:46.586341    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:46.586350    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:49.099325    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:49.787679    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:49.787825    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:49.805561    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:49.805647    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:49.826859    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:49.826948    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:49.839679    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:49.839749    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:49.854056    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:49.854130    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:49.864524    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:49.864593    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:49.874923    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:49.874997    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:49.885607    3932 logs.go:276] 0 containers: []
	W0708 13:08:49.885619    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:49.885675    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:49.895891    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:49.895910    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:49.895915    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:49.900475    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:49.900486    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:08:49.915670    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:49.915682    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:49.933740    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:49.933749    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:49.959947    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:49.959957    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:49.976679    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:49.976694    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:50.015639    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:50.015650    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:50.027405    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:50.027421    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:50.039594    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:50.039604    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:50.076694    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:50.076702    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:50.094919    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:50.094929    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:50.108957    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:50.108969    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:50.120720    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:50.120730    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:50.131942    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:50.131951    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:50.143205    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:50.143214    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:54.101386    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:54.101535    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:54.116632    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:54.116717    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:54.128831    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:54.128897    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:54.139984    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:54.140057    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:54.150907    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:54.150979    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:54.161796    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:54.161867    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:54.172353    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:54.172422    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:54.182819    4087 logs.go:276] 0 containers: []
	W0708 13:08:54.182829    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:54.182887    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:54.193002    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:54.193021    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:54.193026    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:54.218524    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:54.218535    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:54.232123    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:54.232134    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:54.243953    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:54.243966    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:54.260257    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:54.260269    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:54.264487    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:54.264495    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:54.278826    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:54.278839    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:54.290204    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:54.290214    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:54.314471    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:54.314481    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:54.326149    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:54.326159    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:54.365759    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:54.365769    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:54.380309    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:54.380318    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:54.394430    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:54.394440    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:54.405786    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:54.405799    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:54.424049    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:54.424063    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:54.435778    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:54.435793    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:54.473009    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:54.473026    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:52.656260    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:56.990578    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:57.658782    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:57.658983    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:57.680382    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:08:57.680463    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:57.693867    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:08:57.693937    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:57.705354    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:08:57.705419    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:57.715676    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:08:57.715739    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:57.726251    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:08:57.726321    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:57.737169    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:08:57.737237    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:57.747072    3932 logs.go:276] 0 containers: []
	W0708 13:08:57.747087    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:57.747148    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:57.757717    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:08:57.757733    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:08:57.757739    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:08:57.770167    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:08:57.770177    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:08:57.782883    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:57.782892    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:57.824967    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:08:57.824979    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:08:57.844619    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:08:57.844626    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:08:57.862046    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:08:57.862058    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:08:57.891637    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:08:57.891649    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:08:57.908125    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:57.908138    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:57.947206    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:57.947224    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:57.951831    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:08:57.951840    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:08:57.963837    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:57.963846    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:57.988243    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:08:57.988254    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:57.999700    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:08:57.999711    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:08:58.014028    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:08:58.014042    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:08:58.025833    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:08:58.025847    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:09:00.542187    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:01.992566    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:01.992614    4087 kubeadm.go:591] duration metric: took 4m3.616013542s to restartPrimaryControlPlane
	W0708 13:09:01.992650    4087 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 13:09:01.992666    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0708 13:09:02.985409    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 13:09:02.990365    4087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 13:09:02.993258    4087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 13:09:02.995968    4087 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 13:09:02.995975    4087 kubeadm.go:156] found existing configuration files:
	
	I0708 13:09:02.995993    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/admin.conf
	I0708 13:09:02.998400    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 13:09:02.998420    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 13:09:03.000919    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/kubelet.conf
	I0708 13:09:03.003675    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 13:09:03.003701    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 13:09:03.006266    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/controller-manager.conf
	I0708 13:09:03.008963    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 13:09:03.008989    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 13:09:03.011983    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/scheduler.conf
	I0708 13:09:03.014520    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 13:09:03.014543    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 13:09:03.017254    4087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 13:09:03.035970    4087 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0708 13:09:03.036015    4087 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 13:09:03.084301    4087 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 13:09:03.084403    4087 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 13:09:03.084456    4087 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 13:09:03.132275    4087 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 13:09:03.139478    4087 out.go:204]   - Generating certificates and keys ...
	I0708 13:09:03.139513    4087 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 13:09:03.139551    4087 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 13:09:03.139600    4087 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 13:09:03.139641    4087 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 13:09:03.139678    4087 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 13:09:03.139706    4087 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 13:09:03.139740    4087 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 13:09:03.139777    4087 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 13:09:03.139817    4087 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 13:09:03.139866    4087 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 13:09:03.139886    4087 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 13:09:03.139917    4087 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 13:09:03.178717    4087 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 13:09:03.348723    4087 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 13:09:03.473750    4087 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 13:09:03.573249    4087 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 13:09:03.607213    4087 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 13:09:03.607631    4087 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 13:09:03.607743    4087 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 13:09:03.713892    4087 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 13:09:03.718119    4087 out.go:204]   - Booting up control plane ...
	I0708 13:09:03.718233    4087 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 13:09:03.718273    4087 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 13:09:03.718310    4087 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 13:09:03.718351    4087 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 13:09:03.718471    4087 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 13:09:05.544532    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:05.544624    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:09:05.555957    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:09:05.556040    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:09:05.567003    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:09:05.567086    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:09:05.578178    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:09:05.578261    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:09:05.589552    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:09:05.589622    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:09:05.605010    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:09:05.605087    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:09:05.615992    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:09:05.616065    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:09:05.627418    3932 logs.go:276] 0 containers: []
	W0708 13:09:05.627429    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:09:05.627494    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:09:05.638528    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:09:05.638547    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:09:05.638552    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:09:05.678632    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:09:05.678642    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:09:05.683253    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:09:05.683262    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:09:05.717562    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:09:05.717575    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:09:05.731785    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:09:05.731797    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:09:05.749315    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:09:05.749327    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:09:05.764050    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:09:05.764064    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:09:05.779096    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:09:05.779108    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:09:05.797988    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:09:05.798003    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:09:05.810553    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:09:05.810567    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:09:05.827456    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:09:05.827468    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:09:05.839671    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:09:05.839683    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:09:05.866258    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:09:05.866280    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:09:05.879122    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:09:05.879136    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:09:05.892818    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:09:05.892830    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:09:08.218287    4087 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502492 seconds
	I0708 13:09:08.218352    4087 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 13:09:08.221925    4087 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 13:09:08.734100    4087 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 13:09:08.737052    4087 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-170000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 13:09:09.240630    4087 kubeadm.go:309] [bootstrap-token] Using token: v9t5ul.rbt3mp7d4hs387ln
	I0708 13:09:09.247168    4087 out.go:204]   - Configuring RBAC rules ...
	I0708 13:09:09.247225    4087 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 13:09:09.247276    4087 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 13:09:09.249352    4087 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 13:09:09.250640    4087 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 13:09:09.251535    4087 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 13:09:09.252482    4087 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 13:09:09.255707    4087 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 13:09:09.454975    4087 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 13:09:09.644259    4087 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 13:09:09.644789    4087 kubeadm.go:309] 
	I0708 13:09:09.644824    4087 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 13:09:09.644851    4087 kubeadm.go:309] 
	I0708 13:09:09.644911    4087 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 13:09:09.644915    4087 kubeadm.go:309] 
	I0708 13:09:09.644931    4087 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 13:09:09.644959    4087 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 13:09:09.644992    4087 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 13:09:09.644999    4087 kubeadm.go:309] 
	I0708 13:09:09.645024    4087 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 13:09:09.645027    4087 kubeadm.go:309] 
	I0708 13:09:09.645047    4087 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 13:09:09.645049    4087 kubeadm.go:309] 
	I0708 13:09:09.645074    4087 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 13:09:09.645109    4087 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 13:09:09.645157    4087 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 13:09:09.645163    4087 kubeadm.go:309] 
	I0708 13:09:09.645203    4087 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 13:09:09.645243    4087 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 13:09:09.645247    4087 kubeadm.go:309] 
	I0708 13:09:09.645286    4087 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token v9t5ul.rbt3mp7d4hs387ln \
	I0708 13:09:09.645332    4087 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e \
	I0708 13:09:09.645341    4087 kubeadm.go:309] 	--control-plane 
	I0708 13:09:09.645345    4087 kubeadm.go:309] 
	I0708 13:09:09.645407    4087 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 13:09:09.645414    4087 kubeadm.go:309] 
	I0708 13:09:09.645465    4087 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token v9t5ul.rbt3mp7d4hs387ln \
	I0708 13:09:09.645517    4087 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e 
	I0708 13:09:09.646121    4087 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 13:09:09.646218    4087 cni.go:84] Creating CNI manager for ""
	I0708 13:09:09.646227    4087 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:09:09.650030    4087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 13:09:09.657996    4087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 13:09:09.661161    4087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
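The two steps above create /etc/cni/net.d and copy in a 496-byte bridge conflist; the file's literal contents are not reproduced in this log. A representative bridge + host-local configuration of that shape, written the same way from inside the guest, might look like the following sketch (plugin names are the standard CNI reference plugins; the 10.244.0.0/16 subnet is an assumption matching the usual minikube pod network):

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF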
	I0708 13:09:09.666028    4087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 13:09:09.666066    4087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 13:09:09.666154    4087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-170000 minikube.k8s.io/updated_at=2024_07_08T13_09_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=stopped-upgrade-170000 minikube.k8s.io/primary=true
	I0708 13:09:09.709099    4087 kubeadm.go:1107] duration metric: took 43.064959ms to wait for elevateKubeSystemPrivileges
	I0708 13:09:09.709118    4087 ops.go:34] apiserver oom_adj: -16
	W0708 13:09:09.709142    4087 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 13:09:09.709149    4087 kubeadm.go:393] duration metric: took 4m11.345810833s to StartCluster
	I0708 13:09:09.709157    4087 settings.go:142] acquiring lock: {Name:mka0c397a57d617e1d77508d22cc3adb2edf5927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:09:09.709248    4087 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:09:09.709647    4087 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:09:09.709868    4087 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:09:09.709892    4087 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 13:09:09.709964    4087 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-170000"
	I0708 13:09:09.709974    4087 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-170000"
	W0708 13:09:09.709977    4087 addons.go:243] addon storage-provisioner should already be in state true
	I0708 13:09:09.709988    4087 host.go:66] Checking if "stopped-upgrade-170000" exists ...
	I0708 13:09:09.709989    4087 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-170000"
	I0708 13:09:09.710003    4087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-170000"
	I0708 13:09:09.710069    4087 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:09:09.714019    4087 out.go:177] * Verifying Kubernetes components...
	I0708 13:09:09.714658    4087 kapi.go:59] client config for stopped-upgrade-170000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10599f4f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 13:09:09.717295    4087 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-170000"
	W0708 13:09:09.717300    4087 addons.go:243] addon default-storageclass should already be in state true
	I0708 13:09:09.717308    4087 host.go:66] Checking if "stopped-upgrade-170000" exists ...
	I0708 13:09:09.717829    4087 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 13:09:09.717834    4087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 13:09:09.717843    4087 sshutil.go:53] new ssh client: &{IP:localhost Port:50565 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0708 13:09:09.720999    4087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:09:08.407273    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:09.724902    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:09:09.728999    4087 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 13:09:09.729010    4087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 13:09:09.729020    4087 sshutil.go:53] new ssh client: &{IP:localhost Port:50565 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0708 13:09:09.817715    4087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 13:09:09.823350    4087 api_server.go:52] waiting for apiserver process to appear ...
	I0708 13:09:09.823400    4087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:09:09.827495    4087 api_server.go:72] duration metric: took 117.619ms to wait for apiserver process to appear ...
	I0708 13:09:09.827504    4087 api_server.go:88] waiting for apiserver healthz status ...
	I0708 13:09:09.827511    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:09.839021    4087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 13:09:09.898095    4087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
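The storageclass.yaml applied above is minikube's default-storageclass addon manifest (271 bytes per the scp line earlier); its exact contents are not shown in this log. A manifest of that kind marks a class as the cluster default via the standard is-default-class annotation; a hedged sketch, assuming the usual minikube names "standard" and "k8s.io/minikube-hostpath":

    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: k8s.io/minikube-hostpath
    EOF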
	I0708 13:09:13.407857    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:13.408061    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:09:13.427113    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:09:13.427204    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:09:13.441206    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:09:13.441279    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:09:13.454846    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:09:13.454921    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:09:13.465920    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:09:13.465980    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:09:13.476438    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:09:13.476504    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:09:13.487479    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:09:13.487546    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:09:13.498670    3932 logs.go:276] 0 containers: []
	W0708 13:09:13.498680    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:09:13.498732    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:09:13.509468    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:09:13.509485    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:09:13.509490    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:09:13.524616    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:09:13.524624    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:09:13.543213    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:09:13.543223    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:09:13.583686    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:09:13.583699    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:09:13.588736    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:09:13.588745    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:09:13.624110    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:09:13.624124    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:09:13.638681    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:09:13.638692    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:09:13.653463    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:09:13.653474    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:09:13.665490    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:09:13.665501    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:09:13.677313    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:09:13.677324    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:09:13.688616    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:09:13.688627    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:09:13.712393    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:09:13.712401    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:09:13.723607    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:09:13.723617    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:09:13.737619    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:09:13.737629    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:09:13.749272    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:09:13.749282    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:09:14.827803    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:14.827841    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:16.269385    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:19.829327    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:19.829362    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:21.271611    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:21.271725    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:09:21.283748    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:09:21.283823    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:09:21.294798    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:09:21.294866    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:09:21.305454    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:09:21.305521    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:09:21.315946    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:09:21.316015    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:09:21.326798    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:09:21.326862    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:09:21.337473    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:09:21.337534    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:09:21.348496    3932 logs.go:276] 0 containers: []
	W0708 13:09:21.348507    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:09:21.348562    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:09:21.363022    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:09:21.363040    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:09:21.363045    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:09:21.389616    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:09:21.389631    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:09:21.429205    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:09:21.429217    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:09:21.434271    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:09:21.434278    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:09:21.446571    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:09:21.446581    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:09:21.461095    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:09:21.461106    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:09:21.473203    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:09:21.473213    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:09:21.485455    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:09:21.485469    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:09:21.497546    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:09:21.497557    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:09:21.536979    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:09:21.536991    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:09:21.555342    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:09:21.555353    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:09:21.567526    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:09:21.567537    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:09:21.580355    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:09:21.580366    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:09:21.596619    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:09:21.596630    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:09:21.609753    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:09:21.609764    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:09:24.138744    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:24.829414    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:24.829436    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:29.140813    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:29.141053    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:09:29.157337    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:09:29.157419    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:09:29.170139    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:09:29.170203    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:09:29.181076    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:09:29.181148    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:09:29.192192    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:09:29.192250    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:09:29.202576    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:09:29.202643    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:09:29.212865    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:09:29.212930    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:09:29.223989    3932 logs.go:276] 0 containers: []
	W0708 13:09:29.224002    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:09:29.224062    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:09:29.238451    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:09:29.238468    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:09:29.238474    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:09:29.277355    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:09:29.277363    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:09:29.282460    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:09:29.282470    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:09:29.296705    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:09:29.296714    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:09:29.310296    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:09:29.310306    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:09:29.334626    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:09:29.334633    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:09:29.345720    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:09:29.345730    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:09:29.364232    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:09:29.364242    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:09:29.376040    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:09:29.376055    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:09:29.394130    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:09:29.394145    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:09:29.411068    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:09:29.411079    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:09:29.422508    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:09:29.422519    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:09:29.434942    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:09:29.434953    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:09:29.470003    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:09:29.470015    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:09:29.483983    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:09:29.483995    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:09:29.829527    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:29.829555    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:31.997359    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:34.829778    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:34.829833    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:39.830392    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:39.830445    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0708 13:09:40.190448    4087 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0708 13:09:40.194423    4087 out.go:177] * Enabled addons: storage-provisioner
	I0708 13:09:36.999526    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:36.999692    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:09:37.013362    3932 logs.go:276] 1 containers: [063efc38d81d]
	I0708 13:09:37.013436    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:09:37.024077    3932 logs.go:276] 1 containers: [52eda3d8b3e7]
	I0708 13:09:37.024149    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:09:37.034678    3932 logs.go:276] 4 containers: [77c0e4961f2a 63e36cf27807 f585feadba35 12a2164c7181]
	I0708 13:09:37.034751    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:09:37.045629    3932 logs.go:276] 1 containers: [bb65792657e6]
	I0708 13:09:37.045697    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:09:37.057138    3932 logs.go:276] 1 containers: [814e848a6031]
	I0708 13:09:37.057205    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:09:37.072442    3932 logs.go:276] 1 containers: [4829cb3c03a2]
	I0708 13:09:37.072510    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:09:37.083529    3932 logs.go:276] 0 containers: []
	W0708 13:09:37.083545    3932 logs.go:278] No container was found matching "kindnet"
	I0708 13:09:37.083607    3932 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:09:37.095260    3932 logs.go:276] 1 containers: [059ae42247ca]
	I0708 13:09:37.095278    3932 logs.go:123] Gathering logs for dmesg ...
	I0708 13:09:37.095284    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:09:37.100150    3932 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:09:37.100160    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:09:37.136704    3932 logs.go:123] Gathering logs for etcd [52eda3d8b3e7] ...
	I0708 13:09:37.136717    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52eda3d8b3e7"
	I0708 13:09:37.151626    3932 logs.go:123] Gathering logs for kube-scheduler [bb65792657e6] ...
	I0708 13:09:37.151639    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb65792657e6"
	I0708 13:09:37.166190    3932 logs.go:123] Gathering logs for kube-controller-manager [4829cb3c03a2] ...
	I0708 13:09:37.166202    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4829cb3c03a2"
	I0708 13:09:37.184348    3932 logs.go:123] Gathering logs for kubelet ...
	I0708 13:09:37.184363    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:09:37.222063    3932 logs.go:123] Gathering logs for coredns [63e36cf27807] ...
	I0708 13:09:37.222072    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e36cf27807"
	I0708 13:09:37.234549    3932 logs.go:123] Gathering logs for coredns [12a2164c7181] ...
	I0708 13:09:37.234557    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a2164c7181"
	I0708 13:09:37.246258    3932 logs.go:123] Gathering logs for storage-provisioner [059ae42247ca] ...
	I0708 13:09:37.246269    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 059ae42247ca"
	I0708 13:09:37.257792    3932 logs.go:123] Gathering logs for kube-apiserver [063efc38d81d] ...
	I0708 13:09:37.257802    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 063efc38d81d"
	I0708 13:09:37.272291    3932 logs.go:123] Gathering logs for container status ...
	I0708 13:09:37.272303    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:09:37.284369    3932 logs.go:123] Gathering logs for coredns [77c0e4961f2a] ...
	I0708 13:09:37.284381    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77c0e4961f2a"
	I0708 13:09:37.295778    3932 logs.go:123] Gathering logs for coredns [f585feadba35] ...
	I0708 13:09:37.295787    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f585feadba35"
	I0708 13:09:37.307710    3932 logs.go:123] Gathering logs for kube-proxy [814e848a6031] ...
	I0708 13:09:37.307720    3932 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 814e848a6031"
	I0708 13:09:37.319091    3932 logs.go:123] Gathering logs for Docker ...
	I0708 13:09:37.319100    3932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:09:39.845024    3932 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:40.206586    4087 addons.go:510] duration metric: took 30.497746542s for enable addons: enabled=[storage-provisioner]
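Only storage-provisioner ends up enabled: the default-storageclass callback failed above because the StorageClasses list against 10.0.2.15:8443 timed out. If the apiserver were reachable, the same default-class marking could be applied by hand with kubectl patch; a sketch, assuming the kubeconfig context is named after the stopped-upgrade-170000 profile and that the class is the usual minikube "standard":

    kubectl --context stopped-upgrade-170000 patch storageclass standard \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'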
	I0708 13:09:44.846993    3932 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:44.850389    3932 out.go:177] 
	W0708 13:09:44.854378    3932 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0708 13:09:44.854385    3932 out.go:239] * 
	W0708 13:09:44.855008    3932 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
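The advice box points at the log bundle for this failure. As a sketch, collecting it for the profile under test here would look like the following (the node name running-upgrade-129000 appears in the sections below; adding the -p profile flag is an assumption on top of the bare command the box gives):

    minikube logs --file=logs.txt -p running-upgrade-129000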
	I0708 13:09:44.870212    3932 out.go:177] 
	I0708 13:09:44.831009    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:44.831056    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:49.831828    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:49.831859    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:54.831951    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:54.831993    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
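Both test processes spend their remaining wait budget polling https://10.0.2.15:8443/healthz and timing out. The same probe can be reproduced by hand from inside either guest; a sketch for the stopped-upgrade-170000 profile (-k skips verification of the apiserver's self-signed serving certificate, and curl being present in the guest image is an assumption):

    minikube ssh -p stopped-upgrade-170000 -- curl -k https://10.0.2.15:8443/healthz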
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-07-08 20:00:43 UTC, ends at Mon 2024-07-08 20:10:01 UTC. --
	Jul 08 20:09:46 running-upgrade-129000 dockerd[3220]: time="2024-07-08T20:09:46.289927839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 08 20:09:46 running-upgrade-129000 dockerd[3220]: time="2024-07-08T20:09:46.289955463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 08 20:09:46 running-upgrade-129000 dockerd[3220]: time="2024-07-08T20:09:46.289964046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 08 20:09:46 running-upgrade-129000 dockerd[3220]: time="2024-07-08T20:09:46.290132955Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1bfcef7d7132a2e2c32860848049372d0fe2b34bc09af6a89c328ae5707340e5 pid=19168 runtime=io.containerd.runc.v2
	Jul 08 20:09:46 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:46Z" level=error msg="ContainerStats resp: {0x4000686240 linux}"
	Jul 08 20:09:46 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:46Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 08 20:09:47 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:47Z" level=error msg="ContainerStats resp: {0x400093d5c0 linux}"
	Jul 08 20:09:47 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:47Z" level=error msg="ContainerStats resp: {0x400093d700 linux}"
	Jul 08 20:09:47 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:47Z" level=error msg="ContainerStats resp: {0x400093d840 linux}"
	Jul 08 20:09:47 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:47Z" level=error msg="ContainerStats resp: {0x4000988800 linux}"
	Jul 08 20:09:47 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:47Z" level=error msg="ContainerStats resp: {0x4000988d40 linux}"
	Jul 08 20:09:47 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:47Z" level=error msg="ContainerStats resp: {0x40008e4740 linux}"
	Jul 08 20:09:47 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:47Z" level=error msg="ContainerStats resp: {0x40008e4b80 linux}"
	Jul 08 20:09:51 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 08 20:09:56 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 08 20:09:57 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:57Z" level=error msg="ContainerStats resp: {0x4000687200 linux}"
	Jul 08 20:09:57 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:57Z" level=error msg="ContainerStats resp: {0x4000890b40 linux}"
	Jul 08 20:09:58 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:58Z" level=error msg="ContainerStats resp: {0x4000988ac0 linux}"
	Jul 08 20:09:59 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:59Z" level=error msg="ContainerStats resp: {0x4000989980 linux}"
	Jul 08 20:09:59 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:59Z" level=error msg="ContainerStats resp: {0x4000989b40 linux}"
	Jul 08 20:09:59 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:59Z" level=error msg="ContainerStats resp: {0x4000872040 linux}"
	Jul 08 20:09:59 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:59Z" level=error msg="ContainerStats resp: {0x4000872600 linux}"
	Jul 08 20:09:59 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:59Z" level=error msg="ContainerStats resp: {0x4000872c40 linux}"
	Jul 08 20:09:59 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:59Z" level=error msg="ContainerStats resp: {0x4000872ec0 linux}"
	Jul 08 20:09:59 running-upgrade-129000 cri-dockerd[3063]: time="2024-07-08T20:09:59Z" level=error msg="ContainerStats resp: {0x4000359680 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1bfcef7d7132a       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   31265c317d23a
	3573639c9f6d9       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   04c3c3440bb56
	77c0e4961f2a9       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   31265c317d23a
	63e36cf27807f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   04c3c3440bb56
	814e848a6031c       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   e357d46dc88ac
	059ae42247cad       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   9f942f4b90e8a
	52eda3d8b3e71       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   a03430c0e8b46
	bb65792657e6b       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   a9210bfea1ee0
	4829cb3c03a27       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   70f76478b6974
	063efc38d81d8       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   3076ef76b49ee
	
	
	==> coredns [1bfcef7d7132] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9002218045349264718.1321489696590579249. HINFO: read udp 10.244.0.3:40077->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9002218045349264718.1321489696590579249. HINFO: read udp 10.244.0.3:56475->10.0.2.3:53: i/o timeout
	
	
	==> coredns [3573639c9f6d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8306315200989065838.981344821148385219. HINFO: read udp 10.244.0.2:57503->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8306315200989065838.981344821148385219. HINFO: read udp 10.244.0.2:48861->10.0.2.3:53: i/o timeout
	
	
	==> coredns [63e36cf27807] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8886783032847751611.886163702335532363. HINFO: read udp 10.244.0.2:55313->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8886783032847751611.886163702335532363. HINFO: read udp 10.244.0.2:45421->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8886783032847751611.886163702335532363. HINFO: read udp 10.244.0.2:35604->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8886783032847751611.886163702335532363. HINFO: read udp 10.244.0.2:41926->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8886783032847751611.886163702335532363. HINFO: read udp 10.244.0.2:60282->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8886783032847751611.886163702335532363. HINFO: read udp 10.244.0.2:60791->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8886783032847751611.886163702335532363. HINFO: read udp 10.244.0.2:47668->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8886783032847751611.886163702335532363. HINFO: read udp 10.244.0.2:35183->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8886783032847751611.886163702335532363. HINFO: read udp 10.244.0.2:51960->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8886783032847751611.886163702335532363. HINFO: read udp 10.244.0.2:43467->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [77c0e4961f2a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 862081946363540603.287683598088523562. HINFO: read udp 10.244.0.3:57112->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 862081946363540603.287683598088523562. HINFO: read udp 10.244.0.3:43095->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 862081946363540603.287683598088523562. HINFO: read udp 10.244.0.3:49613->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 862081946363540603.287683598088523562. HINFO: read udp 10.244.0.3:49470->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 862081946363540603.287683598088523562. HINFO: read udp 10.244.0.3:39057->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 862081946363540603.287683598088523562. HINFO: read udp 10.244.0.3:48672->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 862081946363540603.287683598088523562. HINFO: read udp 10.244.0.3:40595->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 862081946363540603.287683598088523562. HINFO: read udp 10.244.0.3:58283->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 862081946363540603.287683598088523562. HINFO: read udp 10.244.0.3:55346->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 862081946363540603.287683598088523562. HINFO: read udp 10.244.0.3:60150->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
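Every CoreDNS instance above fails the same way: HINFO probes forwarded to 10.0.2.3:53 (the QEMU user-mode DNS) time out, so the upstream resolver is unreachable from the pod network. A quick check of that upstream from the node itself might look like this sketch (it assumes busybox nslookup in the guest image; kubernetes.io is just an arbitrary name to resolve):

    minikube ssh -p running-upgrade-129000 -- nslookup kubernetes.io 10.0.2.3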
	
	
	==> describe nodes <==
	Name:               running-upgrade-129000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-129000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad
	                    minikube.k8s.io/name=running-upgrade-129000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_08T13_05_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jul 2024 20:05:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-129000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jul 2024 20:09:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jul 2024 20:05:43 +0000   Mon, 08 Jul 2024 20:05:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jul 2024 20:05:43 +0000   Mon, 08 Jul 2024 20:05:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jul 2024 20:05:43 +0000   Mon, 08 Jul 2024 20:05:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jul 2024 20:05:43 +0000   Mon, 08 Jul 2024 20:05:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-129000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 720db8a491a1440abeaad4e56384e274
	  System UUID:                720db8a491a1440abeaad4e56384e274
	  Boot ID:                    4ae37984-0ad3-4d50-bdfb-345e79ea188a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-7cpp6                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-b9f2k                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-129000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-129000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-129000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-bjttm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-129000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-129000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-129000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-129000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-129000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-129000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-129000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-129000 status is now: NodeReady
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-129000 event: Registered Node running-upgrade-129000 in Controller
	
	
	==> dmesg <==
	[  +2.203239] systemd-fstab-generator[878]: Ignoring "noauto" for root device
	[  +0.059633] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +0.061266] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +1.144693] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.068042] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.062568] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[Jul 8 20:01] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[  +9.145124] systemd-fstab-generator[1927]: Ignoring "noauto" for root device
	[  +2.614377] systemd-fstab-generator[2200]: Ignoring "noauto" for root device
	[  +0.146662] systemd-fstab-generator[2236]: Ignoring "noauto" for root device
	[  +0.095887] systemd-fstab-generator[2249]: Ignoring "noauto" for root device
	[  +0.080164] systemd-fstab-generator[2262]: Ignoring "noauto" for root device
	[ +12.668862] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.235011] systemd-fstab-generator[3017]: Ignoring "noauto" for root device
	[  +0.080466] systemd-fstab-generator[3031]: Ignoring "noauto" for root device
	[  +0.063783] systemd-fstab-generator[3042]: Ignoring "noauto" for root device
	[  +0.079513] systemd-fstab-generator[3056]: Ignoring "noauto" for root device
	[  +2.298859] systemd-fstab-generator[3207]: Ignoring "noauto" for root device
	[  +2.712999] systemd-fstab-generator[3694]: Ignoring "noauto" for root device
	[  +1.577164] systemd-fstab-generator[4051]: Ignoring "noauto" for root device
	[ +17.834949] kauditd_printk_skb: 68 callbacks suppressed
	[Jul 8 20:02] kauditd_printk_skb: 19 callbacks suppressed
	[Jul 8 20:05] systemd-fstab-generator[12193]: Ignoring "noauto" for root device
	[  +5.632683] systemd-fstab-generator[12783]: Ignoring "noauto" for root device
	[  +0.456547] systemd-fstab-generator[12916]: Ignoring "noauto" for root device
	
	
	==> etcd [52eda3d8b3e7] <==
	{"level":"info","ts":"2024-07-08T20:05:39.553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-08T20:05:39.562Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-08T20:05:39.562Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-08T20:05:39.564Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-08T20:05:39.562Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-08T20:05:39.564Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-08T20:05:39.564Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-08T20:05:39.719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-08T20:05:39.719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-08T20:05:39.719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-08T20:05:39.719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-08T20:05:39.719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-08T20:05:39.719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-08T20:05:39.719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-08T20:05:39.719Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T20:05:39.723Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T20:05:39.723Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-129000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-08T20:05:39.723Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T20:05:39.723Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-08T20:05:39.724Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-08T20:05:39.724Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-08T20:05:39.724Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-08T20:05:39.724Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-08T20:05:39.725Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-08T20:05:39.727Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 20:10:01 up 9 min,  0 users,  load average: 0.07, 0.20, 0.14
	Linux running-upgrade-129000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [063efc38d81d] <==
	I0708 20:05:41.159530       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0708 20:05:41.191002       1 cache.go:39] Caches are synced for autoregister controller
	I0708 20:05:41.191234       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0708 20:05:41.191990       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0708 20:05:41.191997       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0708 20:05:41.192250       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0708 20:05:41.211009       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0708 20:05:41.916853       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0708 20:05:42.096820       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0708 20:05:42.099522       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0708 20:05:42.099544       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0708 20:05:42.227186       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 20:05:42.236682       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0708 20:05:42.341853       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0708 20:05:42.344083       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0708 20:05:42.344436       1 controller.go:611] quota admission added evaluator for: endpoints
	I0708 20:05:42.345837       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 20:05:43.234504       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0708 20:05:43.805049       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0708 20:05:43.808195       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0708 20:05:43.814549       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0708 20:05:43.868372       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 20:05:57.347902       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0708 20:05:57.997567       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0708 20:05:58.508118       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [4829cb3c03a2] <==
	I0708 20:05:57.245999       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0708 20:05:57.246024       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-129000. Assuming now as a timestamp.
	I0708 20:05:57.246048       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0708 20:05:57.245940       1 shared_informer.go:262] Caches are synced for stateful set
	I0708 20:05:57.246204       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0708 20:05:57.246344       1 event.go:294] "Event occurred" object="running-upgrade-129000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-129000 event: Registered Node running-upgrade-129000 in Controller"
	I0708 20:05:57.265638       1 shared_informer.go:262] Caches are synced for node
	I0708 20:05:57.265743       1 range_allocator.go:173] Starting range CIDR allocator
	I0708 20:05:57.265748       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0708 20:05:57.265752       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0708 20:05:57.268604       1 range_allocator.go:374] Set node running-upgrade-129000 PodCIDR to [10.244.0.0/24]
	I0708 20:05:57.288189       1 shared_informer.go:262] Caches are synced for persistent volume
	I0708 20:05:57.296408       1 shared_informer.go:262] Caches are synced for TTL
	I0708 20:05:57.296807       1 shared_informer.go:262] Caches are synced for attach detach
	I0708 20:05:57.296830       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0708 20:05:57.346081       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0708 20:05:57.349023       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0708 20:05:57.349856       1 shared_informer.go:262] Caches are synced for resource quota
	I0708 20:05:57.357113       1 shared_informer.go:262] Caches are synced for resource quota
	I0708 20:05:57.762806       1 shared_informer.go:262] Caches are synced for garbage collector
	I0708 20:05:57.796284       1 shared_informer.go:262] Caches are synced for garbage collector
	I0708 20:05:57.796296       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0708 20:05:58.001549       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bjttm"
	I0708 20:05:58.148330       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-7cpp6"
	I0708 20:05:58.150851       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-b9f2k"
	
	
	==> kube-proxy [814e848a6031] <==
	I0708 20:05:58.496280       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0708 20:05:58.496312       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0708 20:05:58.496325       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0708 20:05:58.505787       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0708 20:05:58.505797       1 server_others.go:206] "Using iptables Proxier"
	I0708 20:05:58.505821       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0708 20:05:58.505999       1 server.go:661] "Version info" version="v1.24.1"
	I0708 20:05:58.506029       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0708 20:05:58.506370       1 config.go:317] "Starting service config controller"
	I0708 20:05:58.506385       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0708 20:05:58.506421       1 config.go:226] "Starting endpoint slice config controller"
	I0708 20:05:58.506428       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0708 20:05:58.506990       1 config.go:444] "Starting node config controller"
	I0708 20:05:58.507016       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0708 20:05:58.606791       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0708 20:05:58.606810       1 shared_informer.go:262] Caches are synced for service config
	I0708 20:05:58.607281       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [bb65792657e6] <==
	W0708 20:05:41.134097       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 20:05:41.134126       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0708 20:05:41.134178       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 20:05:41.134212       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0708 20:05:41.134246       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 20:05:41.134268       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0708 20:05:41.134309       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 20:05:41.134333       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0708 20:05:41.134406       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 20:05:41.134430       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0708 20:05:41.134474       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 20:05:41.134498       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0708 20:05:41.134740       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 20:05:41.134780       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0708 20:05:41.134858       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 20:05:41.134891       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 20:05:41.134953       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 20:05:41.134977       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 20:05:41.943475       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 20:05:41.943498       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0708 20:05:42.000245       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 20:05:42.000290       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0708 20:05:42.061251       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 20:05:42.061265       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0708 20:05:42.732888       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-07-08 20:00:43 UTC, ends at Mon 2024-07-08 20:10:01 UTC. --
	Jul 08 20:05:46 running-upgrade-129000 kubelet[12789]: E0708 20:05:46.041570   12789 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-129000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-129000"
	Jul 08 20:05:57 running-upgrade-129000 kubelet[12789]: I0708 20:05:57.251839   12789 topology_manager.go:200] "Topology Admit Handler"
	Jul 08 20:05:57 running-upgrade-129000 kubelet[12789]: I0708 20:05:57.301494   12789 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 08 20:05:57 running-upgrade-129000 kubelet[12789]: I0708 20:05:57.301970   12789 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 08 20:05:57 running-upgrade-129000 kubelet[12789]: I0708 20:05:57.403336   12789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/707f22a0-688e-4344-8ca1-f4e272e460a2-tmp\") pod \"storage-provisioner\" (UID: \"707f22a0-688e-4344-8ca1-f4e272e460a2\") " pod="kube-system/storage-provisioner"
	Jul 08 20:05:57 running-upgrade-129000 kubelet[12789]: I0708 20:05:57.403374   12789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbwjz\" (UniqueName: \"kubernetes.io/projected/707f22a0-688e-4344-8ca1-f4e272e460a2-kube-api-access-pbwjz\") pod \"storage-provisioner\" (UID: \"707f22a0-688e-4344-8ca1-f4e272e460a2\") " pod="kube-system/storage-provisioner"
	Jul 08 20:05:57 running-upgrade-129000 kubelet[12789]: E0708 20:05:57.507448   12789 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 08 20:05:57 running-upgrade-129000 kubelet[12789]: E0708 20:05:57.507468   12789 projected.go:192] Error preparing data for projected volume kube-api-access-pbwjz for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 08 20:05:57 running-upgrade-129000 kubelet[12789]: E0708 20:05:57.507503   12789 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/707f22a0-688e-4344-8ca1-f4e272e460a2-kube-api-access-pbwjz podName:707f22a0-688e-4344-8ca1-f4e272e460a2 nodeName:}" failed. No retries permitted until 2024-07-08 20:05:58.007489702 +0000 UTC m=+14.213983949 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pbwjz" (UniqueName: "kubernetes.io/projected/707f22a0-688e-4344-8ca1-f4e272e460a2-kube-api-access-pbwjz") pod "storage-provisioner" (UID: "707f22a0-688e-4344-8ca1-f4e272e460a2") : configmap "kube-root-ca.crt" not found
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: I0708 20:05:58.003226   12789 topology_manager.go:200] "Topology Admit Handler"
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: I0708 20:05:58.007855   12789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2472209-de58-4844-b260-388428dd5f5d-kube-proxy\") pod \"kube-proxy-bjttm\" (UID: \"d2472209-de58-4844-b260-388428dd5f5d\") " pod="kube-system/kube-proxy-bjttm"
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: I0708 20:05:58.007918   12789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2472209-de58-4844-b260-388428dd5f5d-lib-modules\") pod \"kube-proxy-bjttm\" (UID: \"d2472209-de58-4844-b260-388428dd5f5d\") " pod="kube-system/kube-proxy-bjttm"
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: I0708 20:05:58.007930   12789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2472209-de58-4844-b260-388428dd5f5d-xtables-lock\") pod \"kube-proxy-bjttm\" (UID: \"d2472209-de58-4844-b260-388428dd5f5d\") " pod="kube-system/kube-proxy-bjttm"
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: I0708 20:05:58.007949   12789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck4pw\" (UniqueName: \"kubernetes.io/projected/d2472209-de58-4844-b260-388428dd5f5d-kube-api-access-ck4pw\") pod \"kube-proxy-bjttm\" (UID: \"d2472209-de58-4844-b260-388428dd5f5d\") " pod="kube-system/kube-proxy-bjttm"
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: I0708 20:05:58.150812   12789 topology_manager.go:200] "Topology Admit Handler"
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: I0708 20:05:58.156274   12789 topology_manager.go:200] "Topology Admit Handler"
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: I0708 20:05:58.209240   12789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfzwk\" (UniqueName: \"kubernetes.io/projected/e4e1b894-17ad-4da0-938e-63c647ae79d1-kube-api-access-cfzwk\") pod \"coredns-6d4b75cb6d-b9f2k\" (UID: \"e4e1b894-17ad-4da0-938e-63c647ae79d1\") " pod="kube-system/coredns-6d4b75cb6d-b9f2k"
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: I0708 20:05:58.209332   12789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/293c7fca-bcbd-4230-ac8a-27f22a350ecc-config-volume\") pod \"coredns-6d4b75cb6d-7cpp6\" (UID: \"293c7fca-bcbd-4230-ac8a-27f22a350ecc\") " pod="kube-system/coredns-6d4b75cb6d-7cpp6"
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: I0708 20:05:58.209374   12789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gnb5\" (UniqueName: \"kubernetes.io/projected/293c7fca-bcbd-4230-ac8a-27f22a350ecc-kube-api-access-4gnb5\") pod \"coredns-6d4b75cb6d-7cpp6\" (UID: \"293c7fca-bcbd-4230-ac8a-27f22a350ecc\") " pod="kube-system/coredns-6d4b75cb6d-7cpp6"
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: I0708 20:05:58.209420   12789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4e1b894-17ad-4da0-938e-63c647ae79d1-config-volume\") pod \"coredns-6d4b75cb6d-b9f2k\" (UID: \"e4e1b894-17ad-4da0-938e-63c647ae79d1\") " pod="kube-system/coredns-6d4b75cb6d-b9f2k"
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: E0708 20:05:58.965427   12789 remote_runtime.go:578] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 12a2164c71817eef565b388119f74d696b36e089ac347cc78136e0f565d5447d" containerID="12a2164c71817eef565b388119f74d696b36e089ac347cc78136e0f565d5447d"
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: E0708 20:05:58.965448   12789 kuberuntime_manager.go:1069] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error: No such container: 12a2164c71817eef565b388119f74d696b36e089ac347cc78136e0f565d5447d" pod="kube-system/coredns-6d4b75cb6d-7cpp6"
	Jul 08 20:05:58 running-upgrade-129000 kubelet[12789]: E0708 20:05:58.965456   12789 generic.go:415] "PLEG: Write status" err="rpc error: code = Unknown desc = Error: No such container: 12a2164c71817eef565b388119f74d696b36e089ac347cc78136e0f565d5447d" pod="kube-system/coredns-6d4b75cb6d-7cpp6"
	Jul 08 20:09:47 running-upgrade-129000 kubelet[12789]: I0708 20:09:47.253068   12789 scope.go:110] "RemoveContainer" containerID="f585feadba358b003247503ed47a05df9a2b2c4d2a66672845fb33301c3f7229"
	Jul 08 20:09:47 running-upgrade-129000 kubelet[12789]: I0708 20:09:47.270319   12789 scope.go:110] "RemoveContainer" containerID="12a2164c71817eef565b388119f74d696b36e089ac347cc78136e0f565d5447d"
	
	
	==> storage-provisioner [059ae42247ca] <==
	I0708 20:05:58.394615       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 20:05:58.400488       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 20:05:58.400558       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 20:05:58.407414       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 20:05:58.407497       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-129000_acf980d6-50da-4e83-8887-a25f11a4aff2!
	I0708 20:05:58.407936       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"54f1c4c8-a07c-4284-85a6-139dd508f842", APIVersion:"v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-129000_acf980d6-50da-4e83-8887-a25f11a4aff2 became leader
	I0708 20:05:58.508257       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-129000_acf980d6-50da-4e83-8887-a25f11a4aff2!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-129000 -n running-upgrade-129000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-129000 -n running-upgrade-129000: exit status 2 (15.596868875s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-129000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-129000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-129000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-129000: (1.492515333s)
--- FAIL: TestRunningBinaryUpgrade (599.87s)
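For reference only (not part of the captured output): the "Stopped" verdict above came from re-reading apiserver state with out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-129000 -n running-upgrade-129000. A minimal Go wrapper that re-runs that same command is sketched below; the binary path and profile name are copied from this report, and since the profile was deleted at the end of the test, the sketch is illustrative rather than reproducible as-is.

	// Sketch: re-run the status probe from helpers_test.go:254 above.
	// Assumes out/minikube-darwin-arm64 and the running-upgrade-129000
	// profile still exist (they do not after the cleanup step above).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.APIServer}}",
			"-p", "running-upgrade-129000",
			"-n", "running-upgrade-129000")
		out, err := cmd.CombinedOutput()
		// In the run recorded above this printed "Stopped" and exited with status 2.
		fmt.Printf("apiserver: %s (err: %v)\n", strings.TrimSpace(string(out)), err)
	}

Exit status 2 with output "Stopped" is exactly what the harness logged above before cleaning up the profile.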

TestKubernetesUpgrade (18.45s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-644000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-644000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.861025292s)

-- stdout --
	* [kubernetes-upgrade-644000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-644000" primary control-plane node in "kubernetes-upgrade-644000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-644000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0708 13:03:18.742845    4007 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:03:18.742974    4007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:03:18.742978    4007 out.go:304] Setting ErrFile to fd 2...
	I0708 13:03:18.742980    4007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:03:18.743120    4007 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:03:18.744215    4007 out.go:298] Setting JSON to false
	I0708 13:03:18.760417    4007 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3766,"bootTime":1720465232,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:03:18.760533    4007 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:03:18.766973    4007 out.go:177] * [kubernetes-upgrade-644000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:03:18.774875    4007 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:03:18.774940    4007 notify.go:220] Checking for updates...
	I0708 13:03:18.780794    4007 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:03:18.783804    4007 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:03:18.785075    4007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:03:18.787836    4007 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:03:18.790808    4007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:03:18.794105    4007 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:03:18.794170    4007 config.go:182] Loaded profile config "running-upgrade-129000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:03:18.794224    4007 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:03:18.798845    4007 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:03:18.805836    4007 start.go:297] selected driver: qemu2
	I0708 13:03:18.805849    4007 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:03:18.805855    4007 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:03:18.808004    4007 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:03:18.810809    4007 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:03:18.813862    4007 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0708 13:03:18.813874    4007 cni.go:84] Creating CNI manager for ""
	I0708 13:03:18.813883    4007 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0708 13:03:18.813913    4007 start.go:340] cluster config:
	{Name:kubernetes-upgrade-644000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:03:18.817319    4007 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:03:18.824753    4007 out.go:177] * Starting "kubernetes-upgrade-644000" primary control-plane node in "kubernetes-upgrade-644000" cluster
	I0708 13:03:18.828790    4007 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0708 13:03:18.828808    4007 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0708 13:03:18.828816    4007 cache.go:56] Caching tarball of preloaded images
	I0708 13:03:18.828884    4007 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:03:18.828891    4007 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0708 13:03:18.828955    4007 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/kubernetes-upgrade-644000/config.json ...
	I0708 13:03:18.828966    4007 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/kubernetes-upgrade-644000/config.json: {Name:mkdc109417940e5cf43863a057c2d7e33c068a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:03:18.829272    4007 start.go:360] acquireMachinesLock for kubernetes-upgrade-644000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:03:18.829307    4007 start.go:364] duration metric: took 25.791µs to acquireMachinesLock for "kubernetes-upgrade-644000"
	I0708 13:03:18.829317    4007 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:03:18.829339    4007 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:03:18.837832    4007 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 13:03:18.864416    4007 start.go:159] libmachine.API.Create for "kubernetes-upgrade-644000" (driver="qemu2")
	I0708 13:03:18.864447    4007 client.go:168] LocalClient.Create starting
	I0708 13:03:18.864510    4007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:03:18.864546    4007 main.go:141] libmachine: Decoding PEM data...
	I0708 13:03:18.864557    4007 main.go:141] libmachine: Parsing certificate...
	I0708 13:03:18.864598    4007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:03:18.864623    4007 main.go:141] libmachine: Decoding PEM data...
	I0708 13:03:18.864632    4007 main.go:141] libmachine: Parsing certificate...
	I0708 13:03:18.865035    4007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:03:19.027609    4007 main.go:141] libmachine: Creating SSH key...
	I0708 13:03:19.150542    4007 main.go:141] libmachine: Creating Disk image...
	I0708 13:03:19.150550    4007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:03:19.150751    4007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2
	I0708 13:03:19.160730    4007 main.go:141] libmachine: STDOUT: 
	I0708 13:03:19.160750    4007 main.go:141] libmachine: STDERR: 
	I0708 13:03:19.160801    4007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2 +20000M
	I0708 13:03:19.169063    4007 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:03:19.169079    4007 main.go:141] libmachine: STDERR: 
	I0708 13:03:19.169091    4007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2
	I0708 13:03:19.169096    4007 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:03:19.169127    4007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:85:6d:eb:fa:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2
	I0708 13:03:19.170761    4007 main.go:141] libmachine: STDOUT: 
	I0708 13:03:19.170778    4007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:03:19.170799    4007 client.go:171] duration metric: took 306.355ms to LocalClient.Create
	I0708 13:03:21.172944    4007 start.go:128] duration metric: took 2.343644667s to createHost
	I0708 13:03:21.173003    4007 start.go:83] releasing machines lock for "kubernetes-upgrade-644000", held for 2.343756s
	W0708 13:03:21.173066    4007 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:03:21.182624    4007 out.go:177] * Deleting "kubernetes-upgrade-644000" in qemu2 ...
	W0708 13:03:21.214575    4007 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:03:21.214604    4007 start.go:728] Will try again in 5 seconds ...
	I0708 13:03:26.216699    4007 start.go:360] acquireMachinesLock for kubernetes-upgrade-644000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:03:26.217141    4007 start.go:364] duration metric: took 352.042µs to acquireMachinesLock for "kubernetes-upgrade-644000"
	I0708 13:03:26.217282    4007 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:03:26.217494    4007 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:03:26.226031    4007 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 13:03:26.274245    4007 start.go:159] libmachine.API.Create for "kubernetes-upgrade-644000" (driver="qemu2")
	I0708 13:03:26.274304    4007 client.go:168] LocalClient.Create starting
	I0708 13:03:26.274438    4007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:03:26.274502    4007 main.go:141] libmachine: Decoding PEM data...
	I0708 13:03:26.274518    4007 main.go:141] libmachine: Parsing certificate...
	I0708 13:03:26.274580    4007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:03:26.274628    4007 main.go:141] libmachine: Decoding PEM data...
	I0708 13:03:26.274642    4007 main.go:141] libmachine: Parsing certificate...
	I0708 13:03:26.275389    4007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:03:26.432257    4007 main.go:141] libmachine: Creating SSH key...
	I0708 13:03:26.520326    4007 main.go:141] libmachine: Creating Disk image...
	I0708 13:03:26.520336    4007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:03:26.520535    4007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2
	I0708 13:03:26.530040    4007 main.go:141] libmachine: STDOUT: 
	I0708 13:03:26.530058    4007 main.go:141] libmachine: STDERR: 
	I0708 13:03:26.530112    4007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2 +20000M
	I0708 13:03:26.538280    4007 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:03:26.538293    4007 main.go:141] libmachine: STDERR: 
	I0708 13:03:26.538310    4007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2
	I0708 13:03:26.538314    4007 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:03:26.538347    4007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:86:2b:6b:3b:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2
	I0708 13:03:26.540048    4007 main.go:141] libmachine: STDOUT: 
	I0708 13:03:26.540066    4007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:03:26.540079    4007 client.go:171] duration metric: took 265.775375ms to LocalClient.Create
	I0708 13:03:28.542150    4007 start.go:128] duration metric: took 2.324702458s to createHost
	I0708 13:03:28.542194    4007 start.go:83] releasing machines lock for "kubernetes-upgrade-644000", held for 2.325097375s
	W0708 13:03:28.542412    4007 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:03:28.550899    4007 out.go:177] 
	W0708 13:03:28.554888    4007 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:03:28.554925    4007 out.go:239] * 
	* 
	W0708 13:03:28.556142    4007 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:03:28.566749    4007 out.go:177] 

** /stderr **
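Both provisioning attempts in the stderr above fail the same way: the qemu2 driver cannot connect to /var/run/socket_vmnet. A minimal Go probe of that socket path is sketched below; the path is taken from the SocketVMnetPath value in the cluster config logged above, and whether a socket_vmnet daemon is supposed to be listening there on this host is an assumption, not something this report verifies.

	// Sketch: check whether anything is accepting connections on the
	// socket path the qemu2 driver tried to use in the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A refused dial here would match the GUEST_PROVISION failures recorded above.
			fmt.Println("not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("listening:", sock)
	}

A "connection refused" from this probe would be consistent with every "Failed to connect to \"/var/run/socket_vmnet\"" line in the stderr above and with the identical errors in the restart attempt that follows.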
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-644000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-644000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-644000: (3.170275291s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-644000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-644000 status --format={{.Host}}: exit status 7 (48.323625ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-644000 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-644000 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.190836083s)

-- stdout --
	* [kubernetes-upgrade-644000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-644000" primary control-plane node in "kubernetes-upgrade-644000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-644000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-644000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0708 13:03:31.826834    4042 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:03:31.826970    4042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:03:31.826973    4042 out.go:304] Setting ErrFile to fd 2...
	I0708 13:03:31.826975    4042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:03:31.827108    4042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:03:31.828169    4042 out.go:298] Setting JSON to false
	I0708 13:03:31.844310    4042 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3779,"bootTime":1720465232,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:03:31.844379    4042 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:03:31.849757    4042 out.go:177] * [kubernetes-upgrade-644000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:03:31.856728    4042 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:03:31.856775    4042 notify.go:220] Checking for updates...
	I0708 13:03:31.864736    4042 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:03:31.867747    4042 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:03:31.870792    4042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:03:31.873811    4042 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:03:31.876716    4042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:03:31.879999    4042 config.go:182] Loaded profile config "kubernetes-upgrade-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0708 13:03:31.880266    4042 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:03:31.887751    4042 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 13:03:31.894769    4042 start.go:297] selected driver: qemu2
	I0708 13:03:31.894774    4042 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:03:31.894834    4042 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:03:31.897347    4042 cni.go:84] Creating CNI manager for ""
	I0708 13:03:31.897364    4042 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:03:31.897405    4042 start.go:340] cluster config:
	{Name:kubernetes-upgrade-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubernetes-upgrade-644000 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:03:31.901118    4042 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:03:31.908764    4042 out.go:177] * Starting "kubernetes-upgrade-644000" primary control-plane node in "kubernetes-upgrade-644000" cluster
	I0708 13:03:31.912777    4042 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:03:31.912794    4042 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:03:31.912805    4042 cache.go:56] Caching tarball of preloaded images
	I0708 13:03:31.912864    4042 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:03:31.912872    4042 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:03:31.912933    4042 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/kubernetes-upgrade-644000/config.json ...
	I0708 13:03:31.913406    4042 start.go:360] acquireMachinesLock for kubernetes-upgrade-644000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:03:31.913442    4042 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "kubernetes-upgrade-644000"
	I0708 13:03:31.913451    4042 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:03:31.913457    4042 fix.go:54] fixHost starting: 
	I0708 13:03:31.913571    4042 fix.go:112] recreateIfNeeded on kubernetes-upgrade-644000: state=Stopped err=<nil>
	W0708 13:03:31.913579    4042 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:03:31.917679    4042 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-644000" ...
	I0708 13:03:31.924688    4042 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:86:2b:6b:3b:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2
	I0708 13:03:31.926802    4042 main.go:141] libmachine: STDOUT: 
	I0708 13:03:31.926821    4042 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:03:31.926850    4042 fix.go:56] duration metric: took 13.39425ms for fixHost
	I0708 13:03:31.926855    4042 start.go:83] releasing machines lock for "kubernetes-upgrade-644000", held for 13.408417ms
	W0708 13:03:31.926861    4042 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:03:31.926904    4042 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:03:31.926909    4042 start.go:728] Will try again in 5 seconds ...
	I0708 13:03:36.928984    4042 start.go:360] acquireMachinesLock for kubernetes-upgrade-644000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:03:36.929545    4042 start.go:364] duration metric: took 472.083µs to acquireMachinesLock for "kubernetes-upgrade-644000"
	I0708 13:03:36.929699    4042 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:03:36.929721    4042 fix.go:54] fixHost starting: 
	I0708 13:03:36.930487    4042 fix.go:112] recreateIfNeeded on kubernetes-upgrade-644000: state=Stopped err=<nil>
	W0708 13:03:36.930517    4042 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:03:36.938871    4042 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-644000" ...
	I0708 13:03:36.942914    4042 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:86:2b:6b:3b:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubernetes-upgrade-644000/disk.qcow2
	I0708 13:03:36.953127    4042 main.go:141] libmachine: STDOUT: 
	I0708 13:03:36.953196    4042 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:03:36.953286    4042 fix.go:56] duration metric: took 23.567708ms for fixHost
	I0708 13:03:36.953304    4042 start.go:83] releasing machines lock for "kubernetes-upgrade-644000", held for 23.735375ms
	W0708 13:03:36.953501    4042 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-644000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-644000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:03:36.961921    4042 out.go:177] 
	W0708 13:03:36.965975    4042 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:03:36.966003    4042 out.go:239] * 
	* 
	W0708 13:03:36.967432    4042 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:03:36.976930    4042 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-644000 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-644000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-644000 version --output=json: exit status 1 (56.691416ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-644000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-08 13:03:37.046626 -0700 PDT m=+2116.506550376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-644000 -n kubernetes-upgrade-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-644000 -n kubernetes-upgrade-644000: exit status 7 (34.054584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-644000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-644000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-644000
--- FAIL: TestKubernetesUpgrade (18.45s)
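
Note: both restart attempts above fail identically — the qemu2 driver cannot reach the socket_vmnet control socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM is never restarted and the upgrade aborts with GUEST_PROVISION. A minimal standalone probe of that socket (a sketch only, not minikube or test-suite code; the path is taken verbatim from the failing command logged above) would look like:

// probe_socket_vmnet.go — hypothetical check, not part of the test suite.
// Dials the unix socket that socket_vmnet_client (and therefore the qemu2
// driver) needs; "connection refused" here reproduces the failure above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing command above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe also reports "connection refused", the socket_vmnet daemon on the host is not listening, which matches every qemu2 start failure in this run.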

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.3s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19195
- KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1538678424/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.30s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.03s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19195
- KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3579189279/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.03s)
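
Note: both TestHyperkitDriverSkipUpgrade subtests fail for the same reason — the hyperkit driver does not exist on darwin/arm64, so minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. A guard of roughly this shape (a sketch only, not the project's actual code in driver_install_or_update_test.go) would skip these subtests on Apple-silicon hosts instead of recording them as failures:

// skip_hyperkit_test.go — hypothetical helper, shown for illustration.
package upgrade_test

import (
	"runtime"
	"testing"
)

// skipIfNoHyperkit skips a test on platforms where the hyperkit driver
// cannot exist, which is exactly the condition behind the two failures above.
func skipIfNoHyperkit(t *testing.T) {
	t.Helper()
	if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
		t.Skip("hyperkit driver is unsupported on darwin/arm64")
	}
}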

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (573.08s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2267748420 start -p stopped-upgrade-170000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2267748420 start -p stopped-upgrade-170000 --memory=2200 --vm-driver=qemu2 : (39.267993375s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2267748420 -p stopped-upgrade-170000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2267748420 -p stopped-upgrade-170000 stop: (12.128204166s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-170000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0708 13:05:52.009244    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 13:07:16.007039    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 13:08:55.058640    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-170000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.575389208s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-170000" primary control-plane node in "stopped-upgrade-170000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-170000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:04:29.633274    4087 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:04:29.633482    4087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:04:29.633486    4087 out.go:304] Setting ErrFile to fd 2...
	I0708 13:04:29.633489    4087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:04:29.633654    4087 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:04:29.634864    4087 out.go:298] Setting JSON to false
	I0708 13:04:29.654058    4087 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3837,"bootTime":1720465232,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:04:29.654129    4087 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:04:29.659274    4087 out.go:177] * [stopped-upgrade-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:04:29.667245    4087 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:04:29.667272    4087 notify.go:220] Checking for updates...
	I0708 13:04:29.674235    4087 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:04:29.677251    4087 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:04:29.680315    4087 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:04:29.683161    4087 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:04:29.686276    4087 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:04:29.689572    4087 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:04:29.691136    4087 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0708 13:04:29.694245    4087 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:04:29.698358    4087 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 13:04:29.703216    4087 start.go:297] selected driver: qemu2
	I0708 13:04:29.703222    4087 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50600 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0708 13:04:29.703271    4087 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:04:29.706066    4087 cni.go:84] Creating CNI manager for ""
	I0708 13:04:29.706085    4087 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:04:29.706117    4087 start.go:340] cluster config:
	{Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50600 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0708 13:04:29.706171    4087 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:04:29.713197    4087 out.go:177] * Starting "stopped-upgrade-170000" primary control-plane node in "stopped-upgrade-170000" cluster
	I0708 13:04:29.719302    4087 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0708 13:04:29.719326    4087 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0708 13:04:29.719332    4087 cache.go:56] Caching tarball of preloaded images
	I0708 13:04:29.719399    4087 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:04:29.719405    4087 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0708 13:04:29.719453    4087 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/config.json ...
	I0708 13:04:29.719733    4087 start.go:360] acquireMachinesLock for stopped-upgrade-170000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:04:29.719767    4087 start.go:364] duration metric: took 27.459µs to acquireMachinesLock for "stopped-upgrade-170000"
	I0708 13:04:29.719775    4087 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:04:29.719780    4087 fix.go:54] fixHost starting: 
	I0708 13:04:29.719890    4087 fix.go:112] recreateIfNeeded on stopped-upgrade-170000: state=Stopped err=<nil>
	W0708 13:04:29.719898    4087 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:04:29.724240    4087 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-170000" ...
	I0708 13:04:29.732310    4087 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50565-:22,hostfwd=tcp::50566-:2376,hostname=stopped-upgrade-170000 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/disk.qcow2
	I0708 13:04:29.777201    4087 main.go:141] libmachine: STDOUT: 
	I0708 13:04:29.777234    4087 main.go:141] libmachine: STDERR: 
	I0708 13:04:29.777241    4087 main.go:141] libmachine: Waiting for VM to start (ssh -p 50565 docker@127.0.0.1)...
	I0708 13:04:49.625945    4087 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/config.json ...
	I0708 13:04:49.626782    4087 machine.go:94] provisionDockerMachine start ...
	I0708 13:04:49.627043    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:49.627598    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:49.627614    4087 main.go:141] libmachine: About to run SSH command:
	hostname
	I0708 13:04:49.726616    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0708 13:04:49.726656    4087 buildroot.go:166] provisioning hostname "stopped-upgrade-170000"
	I0708 13:04:49.726776    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:49.727021    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:49.727032    4087 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-170000 && echo "stopped-upgrade-170000" | sudo tee /etc/hostname
	I0708 13:04:49.812087    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-170000
	
	I0708 13:04:49.812156    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:49.812309    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:49.812319    4087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-170000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-170000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-170000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 13:04:49.890645    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0708 13:04:49.890657    4087 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19195-1270/.minikube CaCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19195-1270/.minikube}
	I0708 13:04:49.890678    4087 buildroot.go:174] setting up certificates
	I0708 13:04:49.890687    4087 provision.go:84] configureAuth start
	I0708 13:04:49.890692    4087 provision.go:143] copyHostCerts
	I0708 13:04:49.890767    4087 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem, removing ...
	I0708 13:04:49.890773    4087 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem
	I0708 13:04:49.890904    4087 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/cert.pem (1123 bytes)
	I0708 13:04:49.891126    4087 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem, removing ...
	I0708 13:04:49.891130    4087 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem
	I0708 13:04:49.891185    4087 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/key.pem (1675 bytes)
	I0708 13:04:49.891304    4087 exec_runner.go:144] found /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem, removing ...
	I0708 13:04:49.891307    4087 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem
	I0708 13:04:49.891358    4087 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.pem (1078 bytes)
	I0708 13:04:49.891458    4087 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-170000 san=[127.0.0.1 localhost minikube stopped-upgrade-170000]
	I0708 13:04:50.001283    4087 provision.go:177] copyRemoteCerts
	I0708 13:04:50.001320    4087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 13:04:50.001329    4087 sshutil.go:53] new ssh client: &{IP:localhost Port:50565 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0708 13:04:50.039622    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 13:04:50.046195    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0708 13:04:50.053323    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 13:04:50.060533    4087 provision.go:87] duration metric: took 169.845125ms to configureAuth
	I0708 13:04:50.060542    4087 buildroot.go:189] setting minikube options for container-runtime
	I0708 13:04:50.060653    4087 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:04:50.060690    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:50.060788    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:50.060792    4087 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0708 13:04:50.133884    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0708 13:04:50.133895    4087 buildroot.go:70] root file system type: tmpfs
	I0708 13:04:50.133953    4087 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0708 13:04:50.134011    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:50.134134    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:50.134167    4087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0708 13:04:50.211500    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0708 13:04:50.211561    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:50.211698    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:50.211706    4087 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0708 13:04:50.598449    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0708 13:04:50.598463    4087 machine.go:97] duration metric: took 971.691583ms to provisionDockerMachine
	I0708 13:04:50.598473    4087 start.go:293] postStartSetup for "stopped-upgrade-170000" (driver="qemu2")
	I0708 13:04:50.598479    4087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 13:04:50.598543    4087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 13:04:50.598553    4087 sshutil.go:53] new ssh client: &{IP:localhost Port:50565 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0708 13:04:50.638714    4087 ssh_runner.go:195] Run: cat /etc/os-release
	I0708 13:04:50.640018    4087 info.go:137] Remote host: Buildroot 2021.02.12
	I0708 13:04:50.640025    4087 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/addons for local assets ...
	I0708 13:04:50.640117    4087 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19195-1270/.minikube/files for local assets ...
	I0708 13:04:50.640241    4087 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem -> 17672.pem in /etc/ssl/certs
	I0708 13:04:50.640364    4087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0708 13:04:50.642735    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /etc/ssl/certs/17672.pem (1708 bytes)
	I0708 13:04:50.649911    4087 start.go:296] duration metric: took 51.43525ms for postStartSetup
	I0708 13:04:50.649941    4087 fix.go:56] duration metric: took 20.930760042s for fixHost
	I0708 13:04:50.649975    4087 main.go:141] libmachine: Using SSH client type: native
	I0708 13:04:50.650085    4087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460e920] 0x104611180 <nil>  [] 0s} localhost 50565 <nil> <nil>}
	I0708 13:04:50.650090    4087 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0708 13:04:50.726243    4087 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720469091.136840587
	
	I0708 13:04:50.726251    4087 fix.go:216] guest clock: 1720469091.136840587
	I0708 13:04:50.726259    4087 fix.go:229] Guest: 2024-07-08 13:04:51.136840587 -0700 PDT Remote: 2024-07-08 13:04:50.649944 -0700 PDT m=+21.047731959 (delta=486.896587ms)
	I0708 13:04:50.726275    4087 fix.go:200] guest clock delta is within tolerance: 486.896587ms
	I0708 13:04:50.726279    4087 start.go:83] releasing machines lock for "stopped-upgrade-170000", held for 21.007108959s
	I0708 13:04:50.726336    4087 ssh_runner.go:195] Run: cat /version.json
	I0708 13:04:50.726345    4087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0708 13:04:50.726345    4087 sshutil.go:53] new ssh client: &{IP:localhost Port:50565 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0708 13:04:50.726359    4087 sshutil.go:53] new ssh client: &{IP:localhost Port:50565 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	W0708 13:04:50.726864    4087 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50687->127.0.0.1:50565: read: connection reset by peer
	I0708 13:04:50.726882    4087 retry.go:31] will retry after 289.126214ms: ssh: handshake failed: read tcp 127.0.0.1:50687->127.0.0.1:50565: read: connection reset by peer
	W0708 13:04:50.764384    4087 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0708 13:04:50.764429    4087 ssh_runner.go:195] Run: systemctl --version
	I0708 13:04:50.766128    4087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0708 13:04:50.767767    4087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0708 13:04:50.767803    4087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0708 13:04:50.770430    4087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0708 13:04:50.775407    4087 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0708 13:04:50.775417    4087 start.go:494] detecting cgroup driver to use...
	I0708 13:04:50.775501    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 13:04:50.782270    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0708 13:04:50.785708    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0708 13:04:50.788965    4087 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0708 13:04:50.789000    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0708 13:04:50.792086    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 13:04:50.795460    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0708 13:04:50.798971    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0708 13:04:50.802183    4087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0708 13:04:50.805453    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0708 13:04:50.808367    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0708 13:04:50.811020    4087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0708 13:04:50.814205    4087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 13:04:50.817205    4087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 13:04:50.819933    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:04:50.898515    4087 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0708 13:04:50.909378    4087 start.go:494] detecting cgroup driver to use...
	I0708 13:04:50.909444    4087 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0708 13:04:50.916569    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 13:04:50.921512    4087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0708 13:04:50.927653    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0708 13:04:50.932064    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 13:04:50.936399    4087 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0708 13:04:50.997831    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0708 13:04:51.003405    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 13:04:51.009899    4087 ssh_runner.go:195] Run: which cri-dockerd
	I0708 13:04:51.011358    4087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0708 13:04:51.014983    4087 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0708 13:04:51.021830    4087 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0708 13:04:51.105700    4087 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0708 13:04:51.190754    4087 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0708 13:04:51.190820    4087 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0708 13:04:51.200344    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:04:51.290608    4087 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 13:04:52.420242    4087 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.12963575s)
	I0708 13:04:52.420321    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0708 13:04:52.425066    4087 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0708 13:04:52.431400    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 13:04:52.436201    4087 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0708 13:04:52.512103    4087 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0708 13:04:52.592764    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:04:52.672687    4087 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0708 13:04:52.678271    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0708 13:04:52.683080    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:04:52.763908    4087 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0708 13:04:52.802537    4087 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0708 13:04:52.802608    4087 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0708 13:04:52.804642    4087 start.go:562] Will wait 60s for crictl version
	I0708 13:04:52.804695    4087 ssh_runner.go:195] Run: which crictl
	I0708 13:04:52.806215    4087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0708 13:04:52.821528    4087 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0708 13:04:52.821592    4087 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 13:04:52.838457    4087 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0708 13:04:52.860236    4087 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0708 13:04:52.860356    4087 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0708 13:04:52.861620    4087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 13:04:52.865723    4087 kubeadm.go:877] updating cluster {Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50600 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0708 13:04:52.865767    4087 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0708 13:04:52.865807    4087 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 13:04:52.880207    4087 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0708 13:04:52.880220    4087 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0708 13:04:52.880263    4087 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0708 13:04:52.883294    4087 ssh_runner.go:195] Run: which lz4
	I0708 13:04:52.884576    4087 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0708 13:04:52.885733    4087 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0708 13:04:52.885743    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0708 13:04:53.825510    4087 docker.go:649] duration metric: took 940.993ms to copy over tarball
	I0708 13:04:53.825567    4087 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0708 13:04:55.003584    4087 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.178028166s)
	I0708 13:04:55.003601    4087 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0708 13:04:55.019041    4087 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0708 13:04:55.022237    4087 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0708 13:04:55.027177    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:04:55.107571    4087 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0708 13:04:56.620818    4087 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.513274584s)
	I0708 13:04:56.620916    4087 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0708 13:04:56.639336    4087 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0708 13:04:56.639346    4087 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0708 13:04:56.639352    4087 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0708 13:04:56.643896    4087 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:04:56.645471    4087 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:04:56.647454    4087 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:04:56.647515    4087 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0708 13:04:56.649017    4087 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:04:56.649122    4087 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:04:56.650669    4087 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0708 13:04:56.650794    4087 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0708 13:04:56.652113    4087 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0708 13:04:56.652203    4087 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:04:56.653169    4087 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:04:56.653261    4087 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0708 13:04:56.654190    4087 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0708 13:04:56.654275    4087 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0708 13:04:56.655107    4087 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:04:56.655739    4087 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0708 13:04:57.108505    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0708 13:04:57.120747    4087 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0708 13:04:57.120769    4087 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0708 13:04:57.120818    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W0708 13:04:57.122814    4087 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0708 13:04:57.122901    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:04:57.131982    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0708 13:04:57.139422    4087 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0708 13:04:57.139449    4087 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:04:57.139501    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0708 13:04:57.149569    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0708 13:04:57.149675    4087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0708 13:04:57.151304    4087 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0708 13:04:57.151316    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0708 13:04:57.155019    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0708 13:04:57.165438    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0708 13:04:57.174412    4087 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0708 13:04:57.174434    4087 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0708 13:04:57.174489    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0708 13:04:57.177665    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:04:57.196516    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:04:57.197921    4087 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0708 13:04:57.197941    4087 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0708 13:04:57.197982    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0708 13:04:57.207981    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0708 13:04:57.208090    4087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0708 13:04:57.217298    4087 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0708 13:04:57.217321    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0708 13:04:57.228243    4087 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0708 13:04:57.228266    4087 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:04:57.228297    4087 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0708 13:04:57.228307    4087 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:04:57.228318    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0708 13:04:57.228332    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0708 13:04:57.231915    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0708 13:04:57.239850    4087 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0708 13:04:57.239962    4087 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:04:57.247030    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0708 13:04:57.247146    4087 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0708 13:04:57.247158    4087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0708 13:04:57.247190    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0708 13:04:57.294871    4087 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0708 13:04:57.294897    4087 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0708 13:04:57.294902    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0708 13:04:57.294903    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0708 13:04:57.294935    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0708 13:04:57.295445    4087 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0708 13:04:57.295456    4087 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0708 13:04:57.295461    4087 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0708 13:04:57.295466    4087 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:04:57.295508    4087 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:04:57.295508    4087 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0708 13:04:57.295543    4087 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0708 13:04:57.295559    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0708 13:04:57.371864    4087 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0708 13:04:57.371894    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0708 13:04:57.371900    4087 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0708 13:04:57.372004    4087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0708 13:04:57.378851    4087 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0708 13:04:57.378874    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0708 13:04:57.446240    4087 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0708 13:04:57.446254    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0708 13:04:57.827739    4087 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0708 13:04:57.827766    4087 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0708 13:04:57.827771    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0708 13:04:57.980688    4087 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0708 13:04:57.980730    4087 cache_images.go:92] duration metric: took 1.34141s to LoadCachedImages
	W0708 13:04:57.980771    4087 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
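Note: only coredns, pause, etcd and storage-provisioner are transferred and loaded; the host-side cache file for kube-apiserver is missing (and, judging by the single warning, the other control-plane images fare no better), so LoadCachedImages reports failure and the run proceeds with whatever the guest already has. The transfer-and-load pattern visible in the log (copy into /var/lib/minikube/images, then pipe the file into docker load) reduces to roughly the sketch below against a local daemon; the helper name loadImageTarball is an assumption, not minikube's code:

// Illustrative sketch (not minikube source): stream a saved image tarball
// into Docker, mirroring the `sudo cat <file> | docker load` step shown
// above for coredns, pause, etcd and storage-provisioner.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadImageTarball(path string) error {
	f, err := os.Open(path)
	if err != nil {
		// analogous to the stat existence-check failures in the log
		return fmt.Errorf("existence check failed: %w", err)
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := loadImageTarball("/var/lib/minikube/images/etcd_3.5.3-0"); err != nil {
		fmt.Fprintln(os.Stderr, "load failed:", err)
	}
}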
	I0708 13:04:57.980777    4087 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0708 13:04:57.980830    4087 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-170000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0708 13:04:57.980900    4087 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0708 13:04:57.994213    4087 cni.go:84] Creating CNI manager for ""
	I0708 13:04:57.994228    4087 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:04:57.994234    4087 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0708 13:04:57.994243    4087 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-170000 NodeName:stopped-upgrade-170000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0708 13:04:57.994313    4087 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-170000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
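Note: the rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (the 2096-byte scp a few lines below) and later diffed against any existing kubeadm.yaml. Purely as an illustration, not minikube's generator, a fragment of such a config can be rendered from Go values with text/template; the params struct and its field names here are assumptions:

// Illustrative sketch: render a fragment of a kubeadm InitConfiguration
// from Go values. Values are the ones visible in the log above.
package main

import (
	"os"
	"text/template"
)

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

type params struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "10.0.2.15",
		BindPort:         8443,
		CRISocket:        "unix:///var/run/cri-dockerd.sock",
		NodeName:         "stopped-upgrade-170000",
	})
}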
	I0708 13:04:57.994369    4087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0708 13:04:57.997711    4087 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 13:04:57.997742    4087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 13:04:58.000913    4087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0708 13:04:58.005940    4087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 13:04:58.010850    4087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0708 13:04:58.016017    4087 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0708 13:04:58.017312    4087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 13:04:58.021342    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:04:58.101905    4087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 13:04:58.107525    4087 certs.go:68] Setting up /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000 for IP: 10.0.2.15
	I0708 13:04:58.107535    4087 certs.go:194] generating shared ca certs ...
	I0708 13:04:58.107543    4087 certs.go:226] acquiring lock for ca certs: {Name:mka13b605a6983b2618b91f3a0bdec43c132a4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:04:58.107709    4087 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key
	I0708 13:04:58.107954    4087 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key
	I0708 13:04:58.107964    4087 certs.go:256] generating profile certs ...
	I0708 13:04:58.108179    4087 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/client.key
	I0708 13:04:58.108197    4087 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07
	I0708 13:04:58.108209    4087 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0708 13:04:58.263782    4087 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07 ...
	I0708 13:04:58.263797    4087 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07: {Name:mk115bf0da0e1aa0b5826bc251335868038dfc84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:04:58.264306    4087 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07 ...
	I0708 13:04:58.264314    4087 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07: {Name:mkffaac2e55ffdfdcc2f53b96f73fb178800d26f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:04:58.264468    4087 certs.go:381] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.crt
	I0708 13:04:58.264751    4087 certs.go:385] copying /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.key
	I0708 13:04:58.265020    4087 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/proxy-client.key
	I0708 13:04:58.265162    4087 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem (1338 bytes)
	W0708 13:04:58.265299    4087 certs.go:480] ignoring /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767_empty.pem, impossibly tiny 0 bytes
	I0708 13:04:58.265307    4087 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca-key.pem (1679 bytes)
	I0708 13:04:58.265336    4087 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem (1078 bytes)
	I0708 13:04:58.265360    4087 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem (1123 bytes)
	I0708 13:04:58.265388    4087 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/key.pem (1675 bytes)
	I0708 13:04:58.265441    4087 certs.go:484] found cert: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem (1708 bytes)
	I0708 13:04:58.265808    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 13:04:58.273006    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0708 13:04:58.279361    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 13:04:58.285674    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 13:04:58.292645    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0708 13:04:58.298830    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 13:04:58.305390    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 13:04:58.312672    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 13:04:58.319005    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 13:04:58.325654    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/1767.pem --> /usr/share/ca-certificates/1767.pem (1338 bytes)
	I0708 13:04:58.332850    4087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/ssl/certs/17672.pem --> /usr/share/ca-certificates/17672.pem (1708 bytes)
	I0708 13:04:58.339581    4087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 13:04:58.344396    4087 ssh_runner.go:195] Run: openssl version
	I0708 13:04:58.346194    4087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 13:04:58.349540    4087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 13:04:58.351186    4087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  8 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0708 13:04:58.351207    4087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 13:04:58.352987    4087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 13:04:58.356081    4087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1767.pem && ln -fs /usr/share/ca-certificates/1767.pem /etc/ssl/certs/1767.pem"
	I0708 13:04:58.358793    4087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1767.pem
	I0708 13:04:58.360146    4087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  8 19:34 /usr/share/ca-certificates/1767.pem
	I0708 13:04:58.360165    4087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1767.pem
	I0708 13:04:58.361908    4087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1767.pem /etc/ssl/certs/51391683.0"
	I0708 13:04:58.365215    4087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17672.pem && ln -fs /usr/share/ca-certificates/17672.pem /etc/ssl/certs/17672.pem"
	I0708 13:04:58.368442    4087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17672.pem
	I0708 13:04:58.369765    4087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  8 19:34 /usr/share/ca-certificates/17672.pem
	I0708 13:04:58.369783    4087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17672.pem
	I0708 13:04:58.371530    4087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17672.pem /etc/ssl/certs/3ec20f2e.0"
	I0708 13:04:58.374449    4087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0708 13:04:58.375978    4087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0708 13:04:58.378195    4087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0708 13:04:58.380215    4087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0708 13:04:58.382223    4087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0708 13:04:58.384018    4087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0708 13:04:58.385760    4087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
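Note: the -checkend 86400 runs above ask openssl whether each control-plane certificate remains valid for at least another 24 hours (exit 0 means it does, exit 1 means it expires within the window). A small illustrative wrapper around the same command, not minikube code:

// Illustrative sketch: wrap `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"fmt"
	"os/exec"
)

func validFor24h(certPath string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	err := cmd.Run()
	if err == nil {
		return true, nil // exit 0: certificate still valid in 86400s
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return false, nil // exit 1: certificate expires within the window
	}
	return false, err // anything else: openssl itself failed
}

func main() {
	ok, err := validFor24h("/var/lib/minikube/certs/apiserver.crt")
	fmt.Println(ok, err)
}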
	I0708 13:04:58.387609    4087 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50600 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0708 13:04:58.387679    4087 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0708 13:04:58.397508    4087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0708 13:04:58.400573    4087 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0708 13:04:58.400580    4087 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0708 13:04:58.400582    4087 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0708 13:04:58.400604    4087 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0708 13:04:58.403270    4087 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 13:04:58.403560    4087 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-170000" does not appear in /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:04:58.403653    4087 kubeconfig.go:62] /Users/jenkins/minikube-integration/19195-1270/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-170000" cluster setting kubeconfig missing "stopped-upgrade-170000" context setting]
	I0708 13:04:58.403845    4087 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:04:58.404317    4087 kapi.go:59] client config for stopped-upgrade-170000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10599f4f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 13:04:58.404754    4087 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 13:04:58.407303    4087 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-170000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
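Note: the drift detected here is the criSocket scheme (the unix:// prefix) plus the kubelet cgroup driver and two added kubelet options, so minikube rewrites kubeadm.yaml and reruns the init phases instead of reusing the stored config. A minimal sketch of drift detection in this style, treating `diff -u` exit status 1 as "configs differ" (illustrative helper, not minikube source):

// Illustrative sketch: compare the existing and freshly rendered kubeadm
// configs; the paths are the ones shown in the log.
package main

import (
	"fmt"
	"os/exec"
)

func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: files identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: differences found
	}
	return false, "", err // exit 2 or worse: diff itself failed
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
	fmt.Print(diff)
}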
	I0708 13:04:58.407307    4087 kubeadm.go:1154] stopping kube-system containers ...
	I0708 13:04:58.407342    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0708 13:04:58.418185    4087 docker.go:483] Stopping containers: [d192ae42697c 9693310828d2 fb1259fd60c1 7420b58631a6 aa9fa9821d3c 9744dceee4c2 367cf0bc5844 440f0ce24e45]
	I0708 13:04:58.418248    4087 ssh_runner.go:195] Run: docker stop d192ae42697c 9693310828d2 fb1259fd60c1 7420b58631a6 aa9fa9821d3c 9744dceee4c2 367cf0bc5844 440f0ce24e45
	I0708 13:04:58.428882    4087 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0708 13:04:58.434143    4087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 13:04:58.437454    4087 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 13:04:58.437465    4087 kubeadm.go:156] found existing configuration files:
	
	I0708 13:04:58.437487    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/admin.conf
	I0708 13:04:58.440132    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 13:04:58.440152    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 13:04:58.442700    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/kubelet.conf
	I0708 13:04:58.445637    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 13:04:58.445660    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 13:04:58.448564    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/controller-manager.conf
	I0708 13:04:58.451276    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 13:04:58.451296    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 13:04:58.454119    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/scheduler.conf
	I0708 13:04:58.457118    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 13:04:58.457142    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 13:04:58.460026    4087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 13:04:58.462828    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:04:58.485016    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:04:58.856750    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:04:58.992346    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:04:59.015541    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0708 13:04:59.042426    4087 api_server.go:52] waiting for apiserver process to appear ...
	I0708 13:04:59.042500    4087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:04:59.544680    4087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:05:00.044540    4087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:05:00.049009    4087 api_server.go:72] duration metric: took 1.006613s to wait for apiserver process to appear ...
	I0708 13:05:00.049022    4087 api_server.go:88] waiting for apiserver healthz status ...
	I0708 13:05:00.049030    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:05.051009    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:05.051049    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:10.051713    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:10.051760    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:15.051923    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:15.051960    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:20.052348    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:20.052369    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:25.053009    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:25.053078    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:30.054149    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:30.054187    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:35.055340    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:35.055389    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:40.056950    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:40.056972    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:45.058805    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:45.058852    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:50.061046    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:50.061093    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:05:55.063253    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:05:55.063293    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:00.064411    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
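Note: every probe of https://10.0.2.15:8443/healthz above times out after roughly five seconds, so the wait loop never sees a healthy apiserver and minikube falls back to gathering component logs below. A minimal sketch of that kind of polling, assuming an illustrative waitForHealthz helper (not minikube's api_server.go); skipping TLS verification is an illustration-only shortcut:

// Illustrative sketch: poll an HTTPS /healthz endpoint with a per-request
// timeout, giving up after an overall deadline.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, perRequest, overall time.Duration) error {
	client := &http.Client{
		Timeout:   perRequest, // ~5s per probe, as in the "Client.Timeout exceeded" lines above
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy within %s", url, overall)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, time.Minute))
}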
	I0708 13:06:00.064508    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:00.075503    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:00.075581    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:00.086578    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:00.086652    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:00.096540    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:00.096616    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:00.107033    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:00.107099    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:00.117721    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:00.117789    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:00.128010    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:00.128084    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:00.138234    4087 logs.go:276] 0 containers: []
	W0708 13:06:00.138247    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:00.138306    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:00.149516    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:00.149538    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:00.149544    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:00.162169    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:00.162186    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:00.275034    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:00.275047    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:00.286255    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:00.286268    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:00.298674    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:00.298686    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:00.314250    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:00.314263    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:00.331728    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:00.331739    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:00.346437    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:00.346448    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:00.372516    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:00.372524    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:00.384523    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:00.384534    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:00.398221    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:00.398231    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:00.413487    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:00.413501    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:00.425179    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:00.425193    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:00.429786    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:00.429794    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:00.441356    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:00.441368    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:00.481337    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:00.481345    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:00.495675    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:00.495688    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
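Note: each "Gathering logs for ..." step above lists the k8s_<component> containers and tails their last 400 log lines. A reduced sketch of that pattern, using hypothetical helpers rather than minikube's logs.go:

// Illustrative sketch: list containers for one component and tail their logs,
// mirroring the docker ps / docker logs commands shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("listing failed:", err)
		return
	}
	for _, id := range ids {
		logs, _ := tailLogs(id)
		fmt.Printf("=== %s ===\n%s", id, logs)
	}
}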
	I0708 13:06:03.023428    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:08.025598    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:08.025754    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:08.037923    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:08.037997    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:08.049326    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:08.049401    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:08.068651    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:08.068726    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:08.079357    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:08.079435    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:08.089582    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:08.089664    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:08.100548    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:08.100615    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:08.110244    4087 logs.go:276] 0 containers: []
	W0708 13:06:08.110256    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:08.110330    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:08.120756    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:08.120774    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:08.120778    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:08.146873    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:08.146887    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:08.161077    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:08.161093    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:08.173478    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:08.173489    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:08.186076    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:08.186089    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:08.197993    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:08.198005    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:08.210685    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:08.210697    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:08.248761    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:08.248770    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:08.263766    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:08.263777    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:08.278794    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:08.278803    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:08.305211    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:08.305219    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:08.345162    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:08.345172    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:08.349646    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:08.349651    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:08.363272    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:08.363284    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:08.378875    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:08.378889    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:08.397074    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:08.397087    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:08.410655    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:08.410666    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:10.927885    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:15.928154    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:15.928335    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:15.939755    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:15.939836    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:15.952398    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:15.952482    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:15.963390    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:15.963464    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:15.974166    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:15.974232    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:15.996192    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:15.996267    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:16.006877    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:16.006947    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:16.017140    4087 logs.go:276] 0 containers: []
	W0708 13:06:16.017151    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:16.017204    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:16.032261    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:16.032280    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:16.032285    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:16.045049    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:16.045059    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:16.064933    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:16.064945    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:16.077969    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:16.077982    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:16.096299    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:16.096309    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:16.107592    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:16.107607    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:16.118800    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:16.118812    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:16.143200    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:16.143209    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:16.185259    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:16.185273    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:16.199576    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:16.199589    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:16.211709    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:16.211720    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:16.226289    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:16.226298    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:16.266091    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:16.266100    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:16.283250    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:16.283262    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:16.287794    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:16.287803    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:16.312530    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:16.312544    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:16.328255    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:16.328267    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:18.841900    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:23.844131    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:23.844252    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:23.855112    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:23.855185    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:23.865767    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:23.865836    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:23.878101    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:23.878165    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:23.889176    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:23.889244    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:23.900201    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:23.900276    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:23.914316    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:23.914385    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:23.932984    4087 logs.go:276] 0 containers: []
	W0708 13:06:23.932994    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:23.933051    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:23.943736    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:23.943754    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:23.943760    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:23.978976    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:23.978987    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:23.993752    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:23.993763    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:24.007680    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:24.007690    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:24.035406    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:24.035417    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:24.073137    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:24.073145    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:24.098685    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:24.098694    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:24.113409    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:24.113420    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:24.125140    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:24.125156    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:24.137024    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:24.137036    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:24.154885    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:24.154895    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:24.168478    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:24.168488    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:24.184219    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:24.184228    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:24.196038    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:24.196054    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:24.208189    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:24.208200    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:24.212555    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:24.212563    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:24.223916    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:24.223928    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:26.751178    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:31.753450    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:31.753605    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:31.776249    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:31.776327    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:31.788835    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:31.788907    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:31.804341    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:31.804403    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:31.814911    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:31.814982    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:31.829767    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:31.829836    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:31.840516    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:31.840588    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:31.851942    4087 logs.go:276] 0 containers: []
	W0708 13:06:31.851952    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:31.852008    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:31.862292    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:31.862310    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:31.862315    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:31.866914    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:31.866920    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:31.905542    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:31.905553    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:31.920026    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:31.920035    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:31.932828    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:31.932838    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:31.972360    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:31.972368    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:31.986182    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:31.986192    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:32.003839    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:32.003853    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:32.021720    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:32.021736    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:32.035112    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:32.035125    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:32.049883    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:32.049894    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:32.066702    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:32.066714    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:32.091186    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:32.091194    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:32.115938    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:32.115949    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:32.129823    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:32.129834    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:32.143950    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:32.143961    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:32.154994    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:32.155010    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:34.669010    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:39.671099    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:39.671199    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:39.683985    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:39.684062    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:39.695542    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:39.695615    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:39.705829    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:39.705906    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:39.716633    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:39.716701    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:39.726743    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:39.726808    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:39.737563    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:39.737628    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:39.747551    4087 logs.go:276] 0 containers: []
	W0708 13:06:39.747564    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:39.747627    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:39.758357    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:39.758382    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:39.758387    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:39.762865    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:39.762871    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:39.788444    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:39.788455    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:39.803845    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:39.803856    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:39.815780    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:39.815792    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:39.826928    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:39.826942    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:39.844704    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:39.844714    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:39.869124    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:39.869133    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:39.882664    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:39.882675    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:39.919675    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:39.919691    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:39.934427    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:39.934436    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:39.946645    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:39.946659    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:39.965201    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:39.965217    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:39.979849    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:39.979860    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:39.991670    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:39.991680    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:40.028384    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:40.028392    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:40.048341    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:40.048351    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:42.563163    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:47.565317    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:47.565495    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:47.583176    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:47.583274    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:47.597135    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:47.597207    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:47.608891    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:47.608964    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:47.626018    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:47.626086    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:47.636601    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:47.636675    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:47.647274    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:47.647343    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:47.660802    4087 logs.go:276] 0 containers: []
	W0708 13:06:47.660815    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:47.660871    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:47.671376    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:47.671398    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:47.671404    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:47.675590    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:47.675597    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:47.686782    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:47.686794    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:47.724322    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:47.724333    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:47.741665    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:47.741675    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:47.756641    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:47.756653    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:47.768093    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:47.768103    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:47.781860    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:47.781872    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:47.795894    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:47.795904    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:47.821156    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:47.821168    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:47.838145    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:47.838156    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:47.852453    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:47.852469    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:47.868106    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:47.868120    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:47.884935    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:47.884945    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:47.923461    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:47.923472    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:47.934660    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:47.934671    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:47.946755    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:47.946766    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:50.472552    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:06:55.474746    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:06:55.474933    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:06:55.494049    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:06:55.494141    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:06:55.509137    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:06:55.509215    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:06:55.521768    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:06:55.521833    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:06:55.532671    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:06:55.532734    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:06:55.543457    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:06:55.543529    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:06:55.554143    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:06:55.554206    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:06:55.565152    4087 logs.go:276] 0 containers: []
	W0708 13:06:55.565167    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:06:55.565225    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:06:55.576207    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:06:55.576228    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:06:55.576233    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:06:55.605649    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:06:55.605662    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:06:55.617133    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:06:55.617146    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:06:55.621544    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:06:55.621553    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:06:55.659230    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:06:55.659242    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:06:55.675280    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:06:55.675291    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:06:55.687066    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:06:55.687078    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:06:55.700702    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:06:55.700713    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:06:55.715183    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:06:55.715194    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:06:55.727037    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:06:55.727047    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:06:55.741655    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:06:55.741667    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:06:55.762767    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:06:55.762777    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:06:55.787175    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:06:55.787185    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:06:55.827246    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:06:55.827259    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:06:55.842454    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:06:55.842465    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:06:55.854225    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:06:55.854237    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:06:55.872315    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:06:55.872327    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:06:58.385713    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:03.387829    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:03.387989    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:03.399968    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:03.400052    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:03.410697    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:03.410769    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:03.421246    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:03.421313    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:03.431794    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:03.431872    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:03.442459    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:03.442519    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:03.452822    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:03.452888    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:03.462633    4087 logs.go:276] 0 containers: []
	W0708 13:07:03.462646    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:03.462713    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:03.473344    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:03.473362    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:03.473367    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:03.485471    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:03.485482    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:03.490038    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:03.490044    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:03.501304    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:03.501317    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:03.515449    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:03.515460    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:03.527349    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:03.527359    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:03.565540    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:03.565551    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:03.590748    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:03.590756    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:03.630232    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:03.630244    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:03.672838    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:03.672848    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:03.697425    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:03.697440    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:03.709294    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:03.709305    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:03.723261    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:03.723271    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:03.735069    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:03.735080    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:03.750096    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:03.750107    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:03.764773    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:03.764782    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:03.781684    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:03.781695    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:06.295318    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:11.297168    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:11.297399    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:11.311352    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:11.311430    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:11.323177    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:11.323246    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:11.333564    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:11.333634    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:11.347907    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:11.347979    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:11.358261    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:11.358333    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:11.368483    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:11.368557    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:11.378990    4087 logs.go:276] 0 containers: []
	W0708 13:07:11.379000    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:11.379058    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:11.390803    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:11.390821    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:11.390826    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:11.415215    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:11.415227    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:11.429511    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:11.429521    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:11.440898    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:11.440911    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:11.454759    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:11.454773    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:11.469455    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:11.469465    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:11.480830    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:11.480844    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:11.504506    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:11.504514    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:11.518265    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:11.518279    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:11.556985    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:11.557001    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:11.561532    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:11.561539    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:11.597348    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:11.597362    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:11.611655    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:11.611665    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:11.623361    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:11.623373    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:11.635658    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:11.635667    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:11.653538    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:11.653549    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:11.664617    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:11.664628    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:14.178482    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:19.180738    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:19.180874    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:19.200714    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:19.200805    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:19.214815    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:19.214892    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:19.227690    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:19.227750    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:19.243583    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:19.243652    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:19.253907    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:19.253976    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:19.264190    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:19.264258    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:19.274662    4087 logs.go:276] 0 containers: []
	W0708 13:07:19.274678    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:19.274732    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:19.284934    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:19.284955    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:19.284961    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:19.320160    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:19.320170    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:19.335308    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:19.335318    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:19.347036    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:19.347047    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:19.361381    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:19.361393    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:19.372996    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:19.373005    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:19.388633    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:19.388646    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:19.428933    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:19.428941    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:19.440468    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:19.440481    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:19.455256    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:19.455268    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:19.479646    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:19.479654    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:19.493382    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:19.493395    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:19.525414    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:19.525426    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:19.539637    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:19.539651    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:19.550604    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:19.550616    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:19.568420    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:19.568430    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:19.579609    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:19.579620    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:22.085522    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:27.087889    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:27.088154    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:27.119995    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:27.120128    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:27.137358    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:27.137457    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:27.150201    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:27.150276    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:27.163059    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:27.163132    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:27.173753    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:27.173822    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:27.184660    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:27.184722    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:27.194846    4087 logs.go:276] 0 containers: []
	W0708 13:07:27.194857    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:27.194914    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:27.205592    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:27.205612    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:27.205618    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:27.241211    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:27.241222    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:27.260019    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:27.260030    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:27.272988    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:27.273000    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:27.287942    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:27.287953    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:27.299626    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:27.299636    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:27.323813    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:27.323821    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:27.328154    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:27.328159    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:27.339982    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:27.339992    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:27.351992    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:27.352002    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:27.371713    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:27.371723    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:27.384485    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:27.384497    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:27.424162    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:27.424173    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:27.441099    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:27.441109    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:27.454788    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:27.454799    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:27.479856    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:27.479867    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:27.494253    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:27.494262    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:30.007223    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:35.009449    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:35.009590    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:35.024265    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:35.024341    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:35.037052    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:35.037130    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:35.047530    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:35.047600    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:35.057867    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:35.057943    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:35.067768    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:35.067833    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:35.078275    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:35.078341    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:35.088380    4087 logs.go:276] 0 containers: []
	W0708 13:07:35.088394    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:35.088445    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:35.098977    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:35.098997    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:35.099003    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:35.138353    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:35.138362    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:35.151703    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:35.151717    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:35.166078    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:35.166090    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:35.190121    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:35.190128    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:35.193854    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:35.193862    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:35.209024    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:35.209035    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:35.221340    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:35.221353    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:35.235923    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:35.235936    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:35.251963    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:35.251973    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:35.286007    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:35.286019    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:35.304017    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:35.304026    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:35.321979    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:35.321990    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:35.352075    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:35.352085    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:35.362990    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:35.363002    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:35.374516    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:35.374526    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:35.394067    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:35.394079    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
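	(The block above is one iteration of a cycle that repeats throughout this run: the apiserver healthz endpoint is probed, the probe times out after roughly five seconds, and the per-component container logs are tailed before the next attempt. Below is a minimal sketch of that probe pattern, assuming a plain Go HTTP client; it is illustrative only, not minikube's source. Only the URL and the ~5 s client timeout are taken from the log; the function name and TLS handling are assumptions.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// checkHealthz probes the apiserver the way the log above does: a single GET
	// with a short client-side timeout, so a hung apiserver surfaces as
	// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5 s gap between "Checking" and "stopped" lines
			Transport: &http.Transport{
				// assumption: the guest VM's apiserver certificate is not trusted here
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %s", resp.Status)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("apiserver not healthy yet:", err)
		}
	}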
	I0708 13:07:37.907044    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:42.904903    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:42.905085    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:42.923218    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:42.923313    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:42.943983    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:42.944058    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:42.955311    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:42.955385    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:42.965776    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:42.965845    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:42.975894    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:42.975965    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:42.986143    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:42.986215    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:42.995961    4087 logs.go:276] 0 containers: []
	W0708 13:07:42.995976    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:42.996036    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:43.006217    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:43.006232    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:43.006238    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:43.018060    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:43.018074    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:43.031779    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:43.031791    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:43.045645    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:43.045659    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:43.056373    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:43.056388    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:43.074696    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:43.074706    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:43.092373    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:43.092387    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:43.103443    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:43.103454    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:43.142916    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:43.142926    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:43.157593    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:43.157606    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:43.168721    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:43.168733    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:43.182540    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:43.182550    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:43.218065    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:43.218079    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:43.243322    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:43.243337    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:43.256870    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:43.256879    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:43.276565    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:43.276576    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:43.299830    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:43.299841    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:45.804229    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:50.803752    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:50.803955    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:50.816668    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:50.816752    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:50.827665    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:50.827745    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:50.838725    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:50.838793    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:50.853724    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:50.853789    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:50.870878    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:50.870942    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:50.881233    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:50.881305    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:50.891547    4087 logs.go:276] 0 containers: []
	W0708 13:07:50.891560    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:50.891609    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:50.903291    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:50.903309    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:50.903315    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:50.941578    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:50.941593    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:50.945850    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:50.945857    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:50.957031    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:50.957047    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:50.981179    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:50.981188    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:51.019373    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:51.019388    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:51.044313    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:51.044324    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:51.057751    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:51.057768    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:07:51.069813    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:51.069825    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:51.084343    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:51.084354    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:51.096164    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:51.096174    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:51.109925    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:51.109935    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:51.126907    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:51.126923    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:51.146097    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:51.146112    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:51.158533    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:51.158544    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:51.170792    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:51.170804    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:51.185270    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:51.185280    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:53.697968    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:07:58.698517    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:07:58.698607    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:07:58.709878    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:07:58.709953    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:07:58.720649    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:07:58.720722    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:07:58.732014    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:07:58.732082    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:07:58.742782    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:07:58.742858    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:07:58.753603    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:07:58.753676    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:07:58.773005    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:07:58.773075    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:07:58.783035    4087 logs.go:276] 0 containers: []
	W0708 13:07:58.783046    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:07:58.783106    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:07:58.793449    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:07:58.793468    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:07:58.793474    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:07:58.818082    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:07:58.818092    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:07:58.832850    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:07:58.832861    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:07:58.846545    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:07:58.846554    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:07:58.860868    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:07:58.860878    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:07:58.873084    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:07:58.873094    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:07:58.895982    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:07:58.895992    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:07:58.915091    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:07:58.915102    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:07:58.928282    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:07:58.928294    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:07:58.967681    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:07:58.967690    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:07:59.002564    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:07:59.002575    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:07:59.016526    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:07:59.016537    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:07:59.028798    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:07:59.028809    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:07:59.044052    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:07:59.044062    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:07:59.061043    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:07:59.061054    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:07:59.072593    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:07:59.072603    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:07:59.076778    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:07:59.076784    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
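	(Each cycle above first discovers the per-component containers with a docker name filter and then tails the last 400 lines of each match. The following is a rough sketch of that discover-then-tail loop, assuming a host with the docker CLI on PATH; it is illustrative only, not minikube's implementation. The component names, the name=k8s_ filter, and the --tail 400 value come from the log; the helper name is made up.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists container IDs whose names match k8s_<component>,
	// mirroring the "docker ps -a --filter=name=k8s_... --format={{.ID}}" lines above.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				// tail the last 400 lines of each matching container, as the log does
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
			}
		}
	}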
	I0708 13:08:01.590007    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:06.591202    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:06.591420    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:06.610509    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:06.610608    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:06.628066    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:06.628136    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:06.639716    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:06.639782    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:06.650625    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:06.650700    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:06.665806    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:06.665882    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:06.681945    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:06.682016    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:06.692267    4087 logs.go:276] 0 containers: []
	W0708 13:08:06.692278    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:06.692333    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:06.707193    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:06.707210    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:06.707215    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:06.721059    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:06.721070    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:06.742425    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:06.742436    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:06.754037    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:06.754048    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:06.786887    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:06.786897    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:06.798363    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:06.798373    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:06.809991    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:06.810001    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:06.823075    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:06.823086    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:06.863153    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:06.863173    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:06.867642    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:06.867649    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:06.902713    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:06.902724    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:06.917094    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:06.917104    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:06.928787    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:06.928797    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:06.951819    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:06.951827    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:06.965846    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:06.965857    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:06.977538    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:06.977550    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:06.994216    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:06.994227    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:09.509239    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:14.510851    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:14.511052    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:14.532505    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:14.532598    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:14.547984    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:14.548048    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:14.560787    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:14.560848    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:14.572978    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:14.573064    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:14.586007    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:14.586080    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:14.596801    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:14.596865    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:14.607157    4087 logs.go:276] 0 containers: []
	W0708 13:08:14.607170    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:14.607229    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:14.618082    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:14.618105    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:14.618111    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:14.632018    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:14.632030    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:14.661578    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:14.661589    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:14.678683    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:14.678697    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:14.690405    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:14.690419    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:14.704950    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:14.704964    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:14.718676    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:14.718687    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:14.742315    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:14.742322    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:14.753885    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:14.753899    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:14.791967    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:14.791976    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:14.796440    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:14.796448    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:14.808010    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:14.808022    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:14.825688    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:14.825698    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:14.837403    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:14.837415    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:14.848443    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:14.848458    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:14.885487    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:14.885497    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:14.900041    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:14.900051    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:17.414861    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:22.416871    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:22.417204    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:22.451379    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:22.451508    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:22.469627    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:22.469714    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:22.484003    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:22.484083    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:22.496368    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:22.496433    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:22.506790    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:22.506863    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:22.517415    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:22.517494    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:22.530281    4087 logs.go:276] 0 containers: []
	W0708 13:08:22.530292    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:22.530350    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:22.541339    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:22.541357    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:22.541365    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:22.553128    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:22.553140    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:22.568124    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:22.568137    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:22.579728    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:22.579739    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:22.584092    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:22.584101    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:22.595777    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:22.595792    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:22.610324    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:22.610336    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:22.635594    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:22.635604    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:22.650655    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:22.650669    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:22.665784    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:22.665800    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:22.685201    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:22.685212    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:22.704466    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:22.704479    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:22.745921    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:22.745941    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:22.760701    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:22.760716    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:22.780514    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:22.780526    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:22.816432    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:22.816443    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:22.841800    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:22.841810    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:25.362927    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:30.364909    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:30.365113    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:30.379857    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:30.379941    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:30.391545    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:30.391621    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:30.401611    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:30.401683    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:30.412206    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:30.412280    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:30.422292    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:30.422368    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:30.432819    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:30.432891    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:30.442471    4087 logs.go:276] 0 containers: []
	W0708 13:08:30.442483    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:30.442539    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:30.460625    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:30.460652    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:30.460658    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:30.474848    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:30.474858    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:30.486130    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:30.486145    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:30.509034    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:30.509048    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:30.520617    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:30.520628    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:30.534410    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:30.534422    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:30.559865    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:30.559876    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:30.577397    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:30.577412    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:30.591913    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:30.591922    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:30.603205    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:30.603217    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:30.615930    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:30.615942    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:30.627402    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:30.627412    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:30.665381    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:30.665390    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:30.669483    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:30.669491    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:30.704007    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:30.704019    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:30.718451    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:30.718462    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:30.732974    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:30.732987    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:33.258544    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:38.260650    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:38.260819    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:38.273393    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:38.273474    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:38.288435    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:38.288506    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:38.299196    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:38.299262    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:38.309978    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:38.310051    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:38.320875    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:38.320946    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:38.331696    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:38.331766    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:38.342044    4087 logs.go:276] 0 containers: []
	W0708 13:08:38.342055    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:38.342111    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:38.352992    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:38.353011    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:38.353017    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:38.368086    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:38.368096    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:38.379747    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:38.379756    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:38.392309    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:38.392320    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:38.406052    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:38.406064    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:38.441776    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:38.441788    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:38.465993    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:38.466003    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:38.481851    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:38.481863    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:38.494329    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:38.494339    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:38.511319    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:38.511329    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:38.522603    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:38.522613    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:38.544740    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:38.544748    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:38.556423    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:38.556434    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:38.595042    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:38.595053    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:38.599054    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:38.599062    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:38.613054    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:38.613066    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:38.625212    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:38.625224    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:41.144195    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:46.146377    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:46.146815    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:46.181474    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:46.181604    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:46.201288    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:46.201386    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:46.220289    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:46.220367    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:46.238154    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:46.238228    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:46.248780    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:46.248855    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:46.260639    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:46.260714    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:46.273494    4087 logs.go:276] 0 containers: []
	W0708 13:08:46.273506    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:46.273570    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:46.284953    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:46.284973    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:46.284979    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:46.298956    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:46.298968    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:46.311177    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:46.311190    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:46.327754    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:46.327765    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:46.361608    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:46.361620    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:46.375849    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:46.375859    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:46.407509    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:46.407521    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:46.427403    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:46.427413    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:46.444987    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:46.444997    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:46.468867    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:46.468875    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:46.480792    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:46.480804    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:46.484833    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:46.484838    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:46.504085    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:46.504098    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:46.520069    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:46.520080    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:46.535079    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:46.535090    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:46.547409    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:46.547420    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:46.586341    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:46.586350    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:49.099325    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:08:54.101386    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:08:54.101535    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:08:54.116632    4087 logs.go:276] 2 containers: [6ea05f4d18cc 7420b58631a6]
	I0708 13:08:54.116717    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:08:54.128831    4087 logs.go:276] 2 containers: [1e89e3203798 9693310828d2]
	I0708 13:08:54.128897    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:08:54.139984    4087 logs.go:276] 1 containers: [98fa118fd098]
	I0708 13:08:54.140057    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:08:54.150907    4087 logs.go:276] 2 containers: [6dbdf148a964 d192ae42697c]
	I0708 13:08:54.150979    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:08:54.161796    4087 logs.go:276] 1 containers: [750b11fad6e2]
	I0708 13:08:54.161867    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:08:54.172353    4087 logs.go:276] 2 containers: [e8da15772873 fb1259fd60c1]
	I0708 13:08:54.172422    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:08:54.182819    4087 logs.go:276] 0 containers: []
	W0708 13:08:54.182829    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:08:54.182887    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:08:54.193002    4087 logs.go:276] 2 containers: [7d824b616b14 514c8e511812]
	I0708 13:08:54.193021    4087 logs.go:123] Gathering logs for kube-apiserver [7420b58631a6] ...
	I0708 13:08:54.193026    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7420b58631a6"
	I0708 13:08:54.218524    4087 logs.go:123] Gathering logs for etcd [1e89e3203798] ...
	I0708 13:08:54.218535    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e89e3203798"
	I0708 13:08:54.232123    4087 logs.go:123] Gathering logs for kube-scheduler [6dbdf148a964] ...
	I0708 13:08:54.232134    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dbdf148a964"
	I0708 13:08:54.243953    4087 logs.go:123] Gathering logs for storage-provisioner [514c8e511812] ...
	I0708 13:08:54.243966    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514c8e511812"
	I0708 13:08:54.260257    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:08:54.260269    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:08:54.264487    4087 logs.go:123] Gathering logs for kube-apiserver [6ea05f4d18cc] ...
	I0708 13:08:54.264495    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea05f4d18cc"
	I0708 13:08:54.278826    4087 logs.go:123] Gathering logs for storage-provisioner [7d824b616b14] ...
	I0708 13:08:54.278839    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d824b616b14"
	I0708 13:08:54.290204    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:08:54.290214    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:08:54.314471    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:08:54.314481    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:08:54.326149    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:08:54.326159    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:08:54.365759    4087 logs.go:123] Gathering logs for kube-scheduler [d192ae42697c] ...
	I0708 13:08:54.365769    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d192ae42697c"
	I0708 13:08:54.380309    4087 logs.go:123] Gathering logs for kube-controller-manager [fb1259fd60c1] ...
	I0708 13:08:54.380318    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1259fd60c1"
	I0708 13:08:54.394430    4087 logs.go:123] Gathering logs for coredns [98fa118fd098] ...
	I0708 13:08:54.394440    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98fa118fd098"
	I0708 13:08:54.405786    4087 logs.go:123] Gathering logs for kube-controller-manager [e8da15772873] ...
	I0708 13:08:54.405799    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8da15772873"
	I0708 13:08:54.424049    4087 logs.go:123] Gathering logs for kube-proxy [750b11fad6e2] ...
	I0708 13:08:54.424063    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750b11fad6e2"
	I0708 13:08:54.435778    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:08:54.435793    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:08:54.473009    4087 logs.go:123] Gathering logs for etcd [9693310828d2] ...
	I0708 13:08:54.473026    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9693310828d2"
	I0708 13:08:56.990578    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:01.992566    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:01.992614    4087 kubeadm.go:591] duration metric: took 4m3.616013542s to restartPrimaryControlPlane
	W0708 13:09:01.992650    4087 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0708 13:09:01.992666    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0708 13:09:02.985409    4087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0708 13:09:02.990365    4087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 13:09:02.993258    4087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 13:09:02.995968    4087 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 13:09:02.995975    4087 kubeadm.go:156] found existing configuration files:
	
	I0708 13:09:02.995993    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/admin.conf
	I0708 13:09:02.998400    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0708 13:09:02.998420    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0708 13:09:03.000919    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/kubelet.conf
	I0708 13:09:03.003675    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0708 13:09:03.003701    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0708 13:09:03.006266    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/controller-manager.conf
	I0708 13:09:03.008963    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0708 13:09:03.008989    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0708 13:09:03.011983    4087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/scheduler.conf
	I0708 13:09:03.014520    4087 kubeadm.go:162] "https://control-plane.minikube.internal:50600" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50600 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0708 13:09:03.014543    4087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0708 13:09:03.017254    4087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0708 13:09:03.035970    4087 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0708 13:09:03.036015    4087 kubeadm.go:309] [preflight] Running pre-flight checks
	I0708 13:09:03.084301    4087 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0708 13:09:03.084403    4087 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0708 13:09:03.084456    4087 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0708 13:09:03.132275    4087 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0708 13:09:03.139478    4087 out.go:204]   - Generating certificates and keys ...
	I0708 13:09:03.139513    4087 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0708 13:09:03.139551    4087 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0708 13:09:03.139600    4087 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0708 13:09:03.139641    4087 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0708 13:09:03.139678    4087 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0708 13:09:03.139706    4087 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0708 13:09:03.139740    4087 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0708 13:09:03.139777    4087 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0708 13:09:03.139817    4087 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0708 13:09:03.139866    4087 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0708 13:09:03.139886    4087 kubeadm.go:309] [certs] Using the existing "sa" key
	I0708 13:09:03.139917    4087 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0708 13:09:03.178717    4087 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0708 13:09:03.348723    4087 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0708 13:09:03.473750    4087 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0708 13:09:03.573249    4087 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0708 13:09:03.607213    4087 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0708 13:09:03.607631    4087 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0708 13:09:03.607743    4087 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0708 13:09:03.713892    4087 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0708 13:09:03.718119    4087 out.go:204]   - Booting up control plane ...
	I0708 13:09:03.718233    4087 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0708 13:09:03.718273    4087 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0708 13:09:03.718310    4087 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0708 13:09:03.718351    4087 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0708 13:09:03.718471    4087 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0708 13:09:08.218287    4087 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502492 seconds
	I0708 13:09:08.218352    4087 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0708 13:09:08.221925    4087 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0708 13:09:08.734100    4087 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0708 13:09:08.737052    4087 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-170000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0708 13:09:09.240630    4087 kubeadm.go:309] [bootstrap-token] Using token: v9t5ul.rbt3mp7d4hs387ln
	I0708 13:09:09.247168    4087 out.go:204]   - Configuring RBAC rules ...
	I0708 13:09:09.247225    4087 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0708 13:09:09.247276    4087 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0708 13:09:09.249352    4087 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0708 13:09:09.250640    4087 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0708 13:09:09.251535    4087 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0708 13:09:09.252482    4087 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0708 13:09:09.255707    4087 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0708 13:09:09.454975    4087 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0708 13:09:09.644259    4087 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0708 13:09:09.644789    4087 kubeadm.go:309] 
	I0708 13:09:09.644824    4087 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0708 13:09:09.644851    4087 kubeadm.go:309] 
	I0708 13:09:09.644911    4087 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0708 13:09:09.644915    4087 kubeadm.go:309] 
	I0708 13:09:09.644931    4087 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0708 13:09:09.644959    4087 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0708 13:09:09.644992    4087 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0708 13:09:09.644999    4087 kubeadm.go:309] 
	I0708 13:09:09.645024    4087 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0708 13:09:09.645027    4087 kubeadm.go:309] 
	I0708 13:09:09.645047    4087 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0708 13:09:09.645049    4087 kubeadm.go:309] 
	I0708 13:09:09.645074    4087 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0708 13:09:09.645109    4087 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0708 13:09:09.645157    4087 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0708 13:09:09.645163    4087 kubeadm.go:309] 
	I0708 13:09:09.645203    4087 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0708 13:09:09.645243    4087 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0708 13:09:09.645247    4087 kubeadm.go:309] 
	I0708 13:09:09.645286    4087 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token v9t5ul.rbt3mp7d4hs387ln \
	I0708 13:09:09.645332    4087 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e \
	I0708 13:09:09.645341    4087 kubeadm.go:309] 	--control-plane 
	I0708 13:09:09.645345    4087 kubeadm.go:309] 
	I0708 13:09:09.645407    4087 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0708 13:09:09.645414    4087 kubeadm.go:309] 
	I0708 13:09:09.645465    4087 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token v9t5ul.rbt3mp7d4hs387ln \
	I0708 13:09:09.645517    4087 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:230a71526e00c18db9a0775e630de2fb59560bfeed9e976d05ee095d6c2f986e 
	I0708 13:09:09.646121    4087 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0708 13:09:09.646218    4087 cni.go:84] Creating CNI manager for ""
	I0708 13:09:09.646227    4087 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:09:09.650030    4087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0708 13:09:09.657996    4087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0708 13:09:09.661161    4087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0708 13:09:09.666028    4087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 13:09:09.666066    4087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 13:09:09.666154    4087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-170000 minikube.k8s.io/updated_at=2024_07_08T13_09_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2dfbd68ba405aca732c579e607220b4538fd22ad minikube.k8s.io/name=stopped-upgrade-170000 minikube.k8s.io/primary=true
	I0708 13:09:09.709099    4087 kubeadm.go:1107] duration metric: took 43.064959ms to wait for elevateKubeSystemPrivileges
	I0708 13:09:09.709118    4087 ops.go:34] apiserver oom_adj: -16
	W0708 13:09:09.709142    4087 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0708 13:09:09.709149    4087 kubeadm.go:393] duration metric: took 4m11.345810833s to StartCluster
	I0708 13:09:09.709157    4087 settings.go:142] acquiring lock: {Name:mka0c397a57d617e1d77508d22cc3adb2edf5927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:09:09.709248    4087 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:09:09.709647    4087 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/kubeconfig: {Name:mkd06393ca6fb9ad91b614216d70dbd8a552e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:09:09.709868    4087 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:09:09.709892    4087 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0708 13:09:09.709964    4087 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-170000"
	I0708 13:09:09.709974    4087 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-170000"
	W0708 13:09:09.709977    4087 addons.go:243] addon storage-provisioner should already be in state true
	I0708 13:09:09.709988    4087 host.go:66] Checking if "stopped-upgrade-170000" exists ...
	I0708 13:09:09.709989    4087 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-170000"
	I0708 13:09:09.710003    4087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-170000"
	I0708 13:09:09.710069    4087 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:09:09.714019    4087 out.go:177] * Verifying Kubernetes components...
	I0708 13:09:09.714658    4087 kapi.go:59] client config for stopped-upgrade-170000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/stopped-upgrade-170000/client.key", CAFile:"/Users/jenkins/minikube-integration/19195-1270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10599f4f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 13:09:09.717295    4087 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-170000"
	W0708 13:09:09.717300    4087 addons.go:243] addon default-storageclass should already be in state true
	I0708 13:09:09.717308    4087 host.go:66] Checking if "stopped-upgrade-170000" exists ...
	I0708 13:09:09.717829    4087 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 13:09:09.717834    4087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 13:09:09.717843    4087 sshutil.go:53] new ssh client: &{IP:localhost Port:50565 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0708 13:09:09.720999    4087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 13:09:09.724902    4087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0708 13:09:09.728999    4087 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 13:09:09.729010    4087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 13:09:09.729020    4087 sshutil.go:53] new ssh client: &{IP:localhost Port:50565 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0708 13:09:09.817715    4087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0708 13:09:09.823350    4087 api_server.go:52] waiting for apiserver process to appear ...
	I0708 13:09:09.823400    4087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 13:09:09.827495    4087 api_server.go:72] duration metric: took 117.619ms to wait for apiserver process to appear ...
	I0708 13:09:09.827504    4087 api_server.go:88] waiting for apiserver healthz status ...
	I0708 13:09:09.827511    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:09.839021    4087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 13:09:09.898095    4087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 13:09:14.827803    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:14.827841    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:19.829327    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:19.829362    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:24.829414    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:24.829436    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:29.829527    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:29.829555    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:34.829778    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:34.829833    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:39.830392    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:39.830445    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0708 13:09:40.190448    4087 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0708 13:09:40.194423    4087 out.go:177] * Enabled addons: storage-provisioner
	I0708 13:09:40.206586    4087 addons.go:510] duration metric: took 30.497746542s for enable addons: enabled=[storage-provisioner]
	I0708 13:09:44.831009    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:44.831056    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:49.831828    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:49.831859    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:54.831951    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:54.831993    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:09:59.833060    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:09:59.833082    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:10:04.834411    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:10:04.834439    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:10:09.836218    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:10:09.836388    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:10:09.847740    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:10:09.847814    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:10:09.858500    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:10:09.858570    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:10:09.869751    4087 logs.go:276] 2 containers: [08c18ccb67ad a3a5c7f9cd83]
	I0708 13:10:09.869821    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:10:09.880464    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:10:09.880540    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:10:09.891414    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:10:09.891487    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:10:09.901873    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:10:09.901941    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:10:09.912015    4087 logs.go:276] 0 containers: []
	W0708 13:10:09.912026    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:10:09.912084    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:10:09.922263    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:10:09.922279    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:10:09.922285    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:10:09.959389    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:10:09.959396    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:10:09.995189    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:10:09.995205    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:10:10.009090    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:10:10.009105    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:10:10.024302    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:10:10.024315    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:10:10.039341    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:10:10.039351    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:10:10.053251    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:10:10.053266    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:10:10.057828    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:10:10.057837    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:10:10.071757    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:10:10.071773    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:10:10.084416    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:10:10.084427    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:10:10.102879    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:10:10.102890    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:10:10.114034    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:10:10.114043    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:10:10.138830    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:10:10.138837    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:10:12.651578    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:10:17.652508    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:10:17.652613    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:10:17.666823    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:10:17.666896    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:10:17.679845    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:10:17.679915    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:10:17.692342    4087 logs.go:276] 2 containers: [08c18ccb67ad a3a5c7f9cd83]
	I0708 13:10:17.692448    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:10:17.705134    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:10:17.705196    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:10:17.717367    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:10:17.717437    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:10:17.729912    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:10:17.729976    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:10:17.759286    4087 logs.go:276] 0 containers: []
	W0708 13:10:17.759300    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:10:17.759374    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:10:17.779780    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:10:17.779798    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:10:17.779804    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:10:17.818622    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:10:17.818634    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:10:17.834445    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:10:17.834456    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:10:17.846957    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:10:17.846970    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:10:17.860373    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:10:17.860386    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:10:17.883811    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:10:17.883828    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:10:17.897490    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:10:17.897503    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:10:17.925295    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:10:17.925312    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:10:17.969891    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:10:17.969908    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:10:17.989523    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:10:17.989535    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:10:18.011614    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:10:18.011633    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:10:18.033589    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:10:18.033607    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:10:18.050271    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:10:18.050287    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:10:20.580865    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:10:25.582947    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:10:25.583189    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:10:25.608592    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:10:25.608714    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:10:25.626258    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:10:25.626346    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:10:25.645731    4087 logs.go:276] 2 containers: [08c18ccb67ad a3a5c7f9cd83]
	I0708 13:10:25.645811    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:10:25.656804    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:10:25.656873    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:10:25.667405    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:10:25.667474    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:10:25.677842    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:10:25.677911    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:10:25.687639    4087 logs.go:276] 0 containers: []
	W0708 13:10:25.687653    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:10:25.687708    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:10:25.697554    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:10:25.697569    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:10:25.697574    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:10:25.721568    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:10:25.721578    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:10:25.736848    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:10:25.736862    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:10:25.774051    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:10:25.774060    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:10:25.778137    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:10:25.778145    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:10:25.812745    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:10:25.812759    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:10:25.827526    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:10:25.827538    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:10:25.839026    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:10:25.839040    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:10:25.850121    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:10:25.850135    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:10:25.866075    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:10:25.866086    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:10:25.877714    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:10:25.877725    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:10:25.898998    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:10:25.899008    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:10:25.917542    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:10:25.917553    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:10:28.431766    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:10:33.432655    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:10:33.433069    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:10:33.467362    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:10:33.467493    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:10:33.488046    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:10:33.488137    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:10:33.502698    4087 logs.go:276] 2 containers: [08c18ccb67ad a3a5c7f9cd83]
	I0708 13:10:33.502774    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:10:33.522722    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:10:33.522792    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:10:33.542217    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:10:33.542295    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:10:33.552899    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:10:33.552963    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:10:33.563339    4087 logs.go:276] 0 containers: []
	W0708 13:10:33.563349    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:10:33.563406    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:10:33.573827    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:10:33.573842    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:10:33.573847    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:10:33.588906    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:10:33.588919    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:10:33.606548    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:10:33.606559    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:10:33.617834    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:10:33.617845    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:10:33.657094    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:10:33.657114    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:10:33.661648    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:10:33.661655    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:10:33.676044    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:10:33.676055    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:10:33.690157    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:10:33.690167    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:10:33.701155    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:10:33.701168    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:10:33.742411    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:10:33.742421    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:10:33.753892    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:10:33.753902    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:10:33.765774    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:10:33.765785    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:10:33.776952    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:10:33.776963    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:10:36.304322    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:10:41.307046    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:10:41.307466    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:10:41.347891    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:10:41.348035    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:10:41.369142    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:10:41.369247    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:10:41.384788    4087 logs.go:276] 2 containers: [08c18ccb67ad a3a5c7f9cd83]
	I0708 13:10:41.384860    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:10:41.396833    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:10:41.396907    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:10:41.409589    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:10:41.409657    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:10:41.420254    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:10:41.420327    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:10:41.430179    4087 logs.go:276] 0 containers: []
	W0708 13:10:41.430188    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:10:41.430238    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:10:41.442005    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:10:41.442022    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:10:41.442028    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:10:41.453374    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:10:41.453387    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:10:41.471074    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:10:41.471085    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:10:41.495379    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:10:41.495388    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:10:41.508635    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:10:41.508646    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:10:41.524342    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:10:41.524355    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:10:41.528990    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:10:41.528998    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:10:41.565685    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:10:41.565697    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:10:41.585265    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:10:41.585276    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:10:41.599124    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:10:41.599134    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:10:41.611009    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:10:41.611019    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:10:41.622659    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:10:41.622669    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:10:41.637135    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:10:41.637145    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:10:44.178531    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:10:49.180857    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:10:49.181289    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:10:49.224740    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:10:49.224882    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:10:49.245673    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:10:49.245779    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:10:49.260061    4087 logs.go:276] 2 containers: [08c18ccb67ad a3a5c7f9cd83]
	I0708 13:10:49.260131    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:10:49.275593    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:10:49.275658    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:10:49.286137    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:10:49.286198    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:10:49.296385    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:10:49.296442    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:10:49.306891    4087 logs.go:276] 0 containers: []
	W0708 13:10:49.306903    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:10:49.306960    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:10:49.317364    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:10:49.317382    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:10:49.317388    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:10:49.329092    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:10:49.329107    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:10:49.340890    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:10:49.340904    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:10:49.356226    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:10:49.356236    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:10:49.369497    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:10:49.369511    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:10:49.391165    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:10:49.391173    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:10:49.429907    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:10:49.429916    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:10:49.434241    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:10:49.434251    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:10:49.471636    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:10:49.471647    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:10:49.484949    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:10:49.484961    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:10:49.511169    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:10:49.511184    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:10:49.523340    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:10:49.523352    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:10:49.542289    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:10:49.542300    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:10:52.059044    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:10:57.061663    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:10:57.062116    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:10:57.100668    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:10:57.100798    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:10:57.128268    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:10:57.128382    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:10:57.142891    4087 logs.go:276] 2 containers: [08c18ccb67ad a3a5c7f9cd83]
	I0708 13:10:57.142966    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:10:57.153964    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:10:57.154026    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:10:57.164816    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:10:57.164887    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:10:57.179883    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:10:57.179949    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:10:57.190481    4087 logs.go:276] 0 containers: []
	W0708 13:10:57.190493    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:10:57.190548    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:10:57.200809    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:10:57.200823    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:10:57.200828    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:10:57.212095    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:10:57.212108    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:10:57.249147    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:10:57.249155    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:10:57.253453    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:10:57.253462    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:10:57.264783    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:10:57.264796    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:10:57.276573    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:10:57.276587    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:10:57.291807    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:10:57.291820    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:10:57.310264    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:10:57.310276    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:10:57.321939    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:10:57.321948    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:10:57.345773    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:10:57.345781    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:10:57.379780    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:10:57.379793    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:10:57.394613    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:10:57.394626    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:10:57.411972    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:10:57.411984    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:10:59.925847    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:11:04.928329    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:11:04.928820    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:11:04.969293    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:11:04.969427    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:11:04.991160    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:11:04.991274    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:11:05.006109    4087 logs.go:276] 2 containers: [08c18ccb67ad a3a5c7f9cd83]
	I0708 13:11:05.006180    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:11:05.022409    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:11:05.022478    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:11:05.032614    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:11:05.032680    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:11:05.043808    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:11:05.043879    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:11:05.054629    4087 logs.go:276] 0 containers: []
	W0708 13:11:05.054642    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:11:05.054696    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:11:05.065559    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:11:05.065575    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:11:05.065581    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:11:05.084620    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:11:05.084633    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:11:05.096093    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:11:05.096107    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:11:05.121178    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:11:05.121184    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:11:05.133982    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:11:05.133995    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:11:05.170716    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:11:05.170727    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:11:05.174751    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:11:05.174759    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:11:05.208482    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:11:05.208497    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:11:05.223000    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:11:05.223010    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:11:05.235630    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:11:05.235640    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:11:05.251048    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:11:05.251060    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:11:05.267457    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:11:05.267470    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:11:05.286836    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:11:05.286848    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:11:07.801897    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:11:12.804272    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:11:12.804530    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:11:12.834930    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:11:12.835087    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:11:12.854487    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:11:12.854589    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:11:12.869044    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:11:12.869123    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:11:12.880669    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:11:12.880734    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:11:12.891075    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:11:12.891144    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:11:12.901559    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:11:12.901628    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:11:12.911857    4087 logs.go:276] 0 containers: []
	W0708 13:11:12.911869    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:11:12.911925    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:11:12.921892    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:11:12.921916    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:11:12.921921    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:11:12.955691    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:11:12.955708    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:11:12.969937    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:11:12.969950    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:11:12.981122    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:11:12.981134    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:11:12.995042    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:11:12.995056    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:11:13.010391    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:11:13.010401    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:11:13.022825    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:11:13.022838    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:11:13.045749    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:11:13.045763    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:11:13.063438    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:11:13.063448    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:11:13.075270    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:11:13.075280    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:11:13.114535    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:11:13.114543    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:11:13.118504    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:11:13.118513    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:11:13.143105    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:11:13.143111    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:11:13.153808    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:11:13.153820    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:11:13.165692    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:11:13.165701    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:11:15.677367    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:11:20.679589    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:11:20.679999    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:11:20.716508    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:11:20.716640    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:11:20.737559    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:11:20.737649    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:11:20.752518    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:11:20.752601    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:11:20.765126    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:11:20.765196    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:11:20.775526    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:11:20.775590    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:11:20.786041    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:11:20.786111    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:11:20.796488    4087 logs.go:276] 0 containers: []
	W0708 13:11:20.796501    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:11:20.796554    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:11:20.806523    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:11:20.806541    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:11:20.806546    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:11:20.810785    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:11:20.810794    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:11:20.825044    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:11:20.825055    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:11:20.837509    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:11:20.837521    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:11:20.861435    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:11:20.861450    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:11:20.877198    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:11:20.877216    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:11:20.895100    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:11:20.895110    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:11:20.906745    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:11:20.906756    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:11:20.918570    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:11:20.918582    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:11:20.930088    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:11:20.930100    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:11:20.966395    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:11:20.966409    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:11:20.984546    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:11:20.984557    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:11:21.021069    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:11:21.021078    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:11:21.032244    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:11:21.032255    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:11:21.044190    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:11:21.044204    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:11:23.561328    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:11:28.563579    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:11:28.564003    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:11:28.609892    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:11:28.610027    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:11:28.630470    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:11:28.630557    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:11:28.645835    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:11:28.645911    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:11:28.659648    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:11:28.659711    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:11:28.670851    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:11:28.670920    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:11:28.682348    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:11:28.682416    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:11:28.692783    4087 logs.go:276] 0 containers: []
	W0708 13:11:28.692793    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:11:28.692848    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:11:28.703484    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:11:28.703504    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:11:28.703510    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:11:28.718701    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:11:28.718714    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:11:28.732818    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:11:28.732831    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:11:28.744764    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:11:28.744776    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:11:28.769872    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:11:28.769879    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:11:28.781646    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:11:28.781659    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:11:28.793617    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:11:28.793625    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:11:28.832277    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:11:28.832287    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:11:28.836905    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:11:28.836911    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:11:28.848478    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:11:28.848489    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:11:28.870129    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:11:28.870144    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:11:28.885574    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:11:28.885583    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:11:28.897754    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:11:28.897764    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:11:28.931482    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:11:28.931499    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:11:28.943122    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:11:28.943136    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:11:31.463298    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:11:36.465776    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:11:36.466225    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:11:36.505844    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:11:36.505988    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:11:36.527873    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:11:36.527985    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:11:36.544137    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:11:36.544204    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:11:36.557229    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:11:36.557292    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:11:36.568834    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:11:36.568903    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:11:36.579315    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:11:36.579381    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:11:36.590295    4087 logs.go:276] 0 containers: []
	W0708 13:11:36.590309    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:11:36.590370    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:11:36.601699    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:11:36.601717    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:11:36.601722    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:11:36.613210    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:11:36.613220    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:11:36.625058    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:11:36.625070    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:11:36.642556    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:11:36.642567    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:11:36.654398    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:11:36.654410    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:11:36.692763    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:11:36.692770    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:11:36.738324    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:11:36.738336    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:11:36.752745    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:11:36.752759    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:11:36.766414    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:11:36.766425    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:11:36.778018    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:11:36.778029    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:11:36.792450    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:11:36.792462    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:11:36.807224    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:11:36.807235    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:11:36.818814    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:11:36.818824    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:11:36.823752    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:11:36.823759    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:11:36.835206    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:11:36.835217    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:11:39.362945    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:11:44.365714    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:11:44.366149    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:11:44.408585    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:11:44.408720    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:11:44.429428    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:11:44.429534    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:11:44.444605    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:11:44.444677    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:11:44.456799    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:11:44.456880    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:11:44.468088    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:11:44.468150    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:11:44.478487    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:11:44.478553    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:11:44.488618    4087 logs.go:276] 0 containers: []
	W0708 13:11:44.488629    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:11:44.488679    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:11:44.501741    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:11:44.501759    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:11:44.501763    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:11:44.519954    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:11:44.519966    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:11:44.532117    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:11:44.532131    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:11:44.543583    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:11:44.543596    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:11:44.548312    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:11:44.548321    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:11:44.561756    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:11:44.561766    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:11:44.596421    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:11:44.596430    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:11:44.611928    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:11:44.611941    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:11:44.627759    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:11:44.627769    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:11:44.647035    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:11:44.647044    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:11:44.672932    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:11:44.672941    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:11:44.710666    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:11:44.710676    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:11:44.722722    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:11:44.722736    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:11:44.734486    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:11:44.734498    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:11:44.748521    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:11:44.748535    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:11:47.268051    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:11:52.268775    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:11:52.268895    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:11:52.280771    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:11:52.280842    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:11:52.291986    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:11:52.292069    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:11:52.303662    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:11:52.303713    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:11:52.314924    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:11:52.315000    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:11:52.326575    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:11:52.326647    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:11:52.338063    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:11:52.338138    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:11:52.349325    4087 logs.go:276] 0 containers: []
	W0708 13:11:52.349338    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:11:52.349414    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:11:52.360200    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:11:52.360220    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:11:52.360226    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:11:52.400224    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:11:52.400233    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:11:52.415614    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:11:52.415624    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:11:52.428490    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:11:52.428504    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:11:52.441663    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:11:52.441672    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:11:52.445650    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:11:52.445657    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:11:52.457917    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:11:52.457928    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:11:52.477310    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:11:52.477320    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:11:52.501696    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:11:52.501711    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:11:52.514255    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:11:52.514267    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:11:52.529427    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:11:52.529439    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:11:52.542338    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:11:52.542349    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:11:52.556601    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:11:52.556614    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:11:52.575230    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:11:52.575243    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:11:52.615402    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:11:52.615425    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:11:55.133528    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:12:00.136143    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:12:00.136355    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:12:00.153531    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:12:00.153616    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:12:00.166458    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:12:00.166537    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:12:00.177876    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:12:00.177937    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:12:00.187987    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:12:00.188054    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:12:00.198570    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:12:00.198626    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:12:00.208848    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:12:00.208912    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:12:00.218911    4087 logs.go:276] 0 containers: []
	W0708 13:12:00.218925    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:12:00.218986    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:12:00.229450    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:12:00.229470    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:12:00.229475    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:12:00.241072    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:12:00.241083    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:12:00.252415    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:12:00.252427    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:12:00.263518    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:12:00.263527    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:12:00.275118    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:12:00.275129    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:12:00.312348    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:12:00.312356    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:12:00.323591    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:12:00.323604    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:12:00.340890    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:12:00.340899    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:12:00.354791    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:12:00.354802    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:12:00.388413    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:12:00.388427    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:12:00.401021    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:12:00.401034    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:12:00.415768    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:12:00.415779    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:12:00.440756    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:12:00.440763    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:12:00.444792    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:12:00.444800    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:12:00.455863    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:12:00.455875    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:12:02.972038    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:12:07.973791    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:12:07.974228    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:12:08.014246    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:12:08.014375    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:12:08.034250    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:12:08.034357    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:12:08.054487    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:12:08.054564    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:12:08.081848    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:12:08.081918    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:12:08.094167    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:12:08.094234    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:12:08.104613    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:12:08.104676    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:12:08.117916    4087 logs.go:276] 0 containers: []
	W0708 13:12:08.117927    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:12:08.117976    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:12:08.128435    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:12:08.128451    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:12:08.128456    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:12:08.142453    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:12:08.142466    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:12:08.155036    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:12:08.155047    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:12:08.172569    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:12:08.172584    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:12:08.186542    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:12:08.186551    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:12:08.210432    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:12:08.210439    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:12:08.222137    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:12:08.222148    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:12:08.235626    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:12:08.235641    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:12:08.274514    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:12:08.274524    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:12:08.287173    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:12:08.287185    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:12:08.298286    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:12:08.298299    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:12:08.309543    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:12:08.309555    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:12:08.326714    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:12:08.326726    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:12:08.331353    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:12:08.331360    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:12:08.364985    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:12:08.364997    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:12:10.879566    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:12:15.880127    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:12:15.880214    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:12:15.892323    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:12:15.892400    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:12:15.904780    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:12:15.904846    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:12:15.916363    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:12:15.916540    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:12:15.927937    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:12:15.927990    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:12:15.939778    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:12:15.939852    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:12:15.951560    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:12:15.951632    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:12:15.963250    4087 logs.go:276] 0 containers: []
	W0708 13:12:15.963261    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:12:15.963305    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:12:15.974985    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:12:15.975005    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:12:15.975010    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:12:15.991178    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:12:15.991190    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:12:16.003454    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:12:16.003466    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:12:16.016234    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:12:16.016250    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:12:16.028931    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:12:16.028944    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:12:16.042643    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:12:16.042654    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:12:16.058679    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:12:16.058692    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:12:16.071539    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:12:16.071554    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:12:16.086415    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:12:16.086428    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:12:16.091582    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:12:16.091595    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:12:16.130694    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:12:16.130708    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:12:16.151009    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:12:16.151024    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:12:16.178164    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:12:16.178175    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:12:16.217511    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:12:16.217526    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:12:16.238570    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:12:16.238581    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:12:18.756672    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:12:23.759165    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:12:23.759380    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:12:23.782942    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:12:23.783060    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:12:23.799108    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:12:23.799179    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:12:23.811598    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:12:23.811670    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:12:23.822467    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:12:23.822538    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:12:23.836672    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:12:23.836744    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:12:23.846598    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:12:23.846664    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:12:23.856615    4087 logs.go:276] 0 containers: []
	W0708 13:12:23.856629    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:12:23.856690    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:12:23.867073    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:12:23.867091    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:12:23.867096    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:12:23.881663    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:12:23.881676    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:12:23.892867    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:12:23.892878    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:12:23.917451    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:12:23.917459    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:12:23.921493    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:12:23.921501    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:12:23.960253    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:12:23.960264    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:12:23.972268    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:12:23.972281    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:12:23.985083    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:12:23.985097    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:12:23.996832    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:12:23.996843    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:12:24.032903    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:12:24.032910    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:12:24.044266    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:12:24.044278    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:12:24.066269    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:12:24.066278    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:12:24.079794    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:12:24.079802    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:12:24.094758    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:12:24.094771    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:12:24.106440    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:12:24.106451    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:12:26.619869    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:12:31.622360    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:12:31.622578    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:12:31.646322    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:12:31.646432    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:12:31.662462    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:12:31.662540    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:12:31.675584    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:12:31.675659    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:12:31.687198    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:12:31.687257    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:12:31.697382    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:12:31.697448    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:12:31.713014    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:12:31.713073    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:12:31.723655    4087 logs.go:276] 0 containers: []
	W0708 13:12:31.723668    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:12:31.723720    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:12:31.734099    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:12:31.734116    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:12:31.734121    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:12:31.748365    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:12:31.748377    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:12:31.762714    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:12:31.762726    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:12:31.774399    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:12:31.774410    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:12:31.799905    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:12:31.799911    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:12:31.825635    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:12:31.825644    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:12:31.836932    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:12:31.836941    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:12:31.848313    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:12:31.848322    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:12:31.884444    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:12:31.884453    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:12:31.888429    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:12:31.888438    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:12:31.899634    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:12:31.899645    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:12:31.911184    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:12:31.911194    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:12:31.926070    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:12:31.926083    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:12:31.963117    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:12:31.963129    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:12:31.976135    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:12:31.976148    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:12:34.495417    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:12:39.498044    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:12:39.498489    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:12:39.543094    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:12:39.543233    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:12:39.564995    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:12:39.565078    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:12:39.579347    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:12:39.579419    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:12:39.591287    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:12:39.591355    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:12:39.605512    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:12:39.605579    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:12:39.615855    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:12:39.615920    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:12:39.625888    4087 logs.go:276] 0 containers: []
	W0708 13:12:39.625898    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:12:39.625947    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:12:39.636213    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:12:39.636228    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:12:39.636233    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:12:39.647363    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:12:39.647373    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:12:39.659854    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:12:39.659868    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:12:39.673782    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:12:39.673794    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:12:39.685695    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:12:39.685703    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:12:39.697087    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:12:39.697097    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:12:39.709083    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:12:39.709094    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:12:39.746772    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:12:39.746795    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:12:39.783933    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:12:39.783945    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:12:39.807592    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:12:39.807607    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:12:39.823712    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:12:39.823724    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:12:39.836394    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:12:39.836406    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:12:39.856955    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:12:39.856967    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:12:39.861550    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:12:39.861561    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:12:39.874429    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:12:39.874441    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:12:42.400614    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:12:47.402805    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:12:47.403244    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:12:47.448647    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:12:47.448760    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:12:47.469023    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:12:47.469108    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:12:47.484189    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:12:47.484265    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:12:47.495001    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:12:47.495069    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:12:47.505550    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:12:47.505619    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:12:47.516629    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:12:47.516700    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:12:47.527982    4087 logs.go:276] 0 containers: []
	W0708 13:12:47.527994    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:12:47.528056    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:12:47.541031    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:12:47.541048    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:12:47.541053    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:12:47.576514    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:12:47.576525    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:12:47.590604    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:12:47.590615    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:12:47.605080    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:12:47.605090    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:12:47.617058    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:12:47.617067    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:12:47.657001    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:12:47.657012    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:12:47.663715    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:12:47.663723    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:12:47.676936    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:12:47.676947    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:12:47.688732    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:12:47.688743    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:12:47.713533    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:12:47.713541    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:12:47.726702    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:12:47.726713    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:12:47.743180    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:12:47.743190    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:12:47.755106    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:12:47.755128    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:12:47.767389    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:12:47.767400    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:12:47.780561    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:12:47.780575    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:12:50.299946    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:12:55.302592    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:12:55.302945    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:12:55.337162    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:12:55.337284    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:12:55.356737    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:12:55.356823    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:12:55.371556    4087 logs.go:276] 4 containers: [3a5ddcecbef8 5d59f1ef8605 08c18ccb67ad a3a5c7f9cd83]
	I0708 13:12:55.371632    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:12:55.383398    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:12:55.383469    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:12:55.401528    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:12:55.401600    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:12:55.412211    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:12:55.412276    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:12:55.422835    4087 logs.go:276] 0 containers: []
	W0708 13:12:55.422848    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:12:55.422900    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:12:55.435346    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:12:55.435365    4087 logs.go:123] Gathering logs for coredns [a3a5c7f9cd83] ...
	I0708 13:12:55.435370    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3a5c7f9cd83"
	I0708 13:12:55.448311    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:12:55.448324    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:12:55.459501    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:12:55.459514    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:12:55.482425    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:12:55.482434    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:12:55.494602    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:12:55.494613    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:12:55.531798    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:12:55.531809    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:12:55.546469    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:12:55.546481    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:12:55.558688    4087 logs.go:123] Gathering logs for coredns [08c18ccb67ad] ...
	I0708 13:12:55.558701    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c18ccb67ad"
	I0708 13:12:55.570295    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:12:55.570307    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:12:55.588158    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:12:55.588167    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:12:55.599859    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:12:55.599873    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:12:55.604024    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:12:55.604039    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:12:55.638108    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:12:55.638119    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:12:55.649766    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:12:55.649777    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:12:55.671801    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:12:55.671813    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:12:58.191101    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:13:03.193743    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:13:03.194121    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0708 13:13:03.227420    4087 logs.go:276] 1 containers: [f1a2ddf0aafe]
	I0708 13:13:03.227535    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0708 13:13:03.246712    4087 logs.go:276] 1 containers: [394d12d0e434]
	I0708 13:13:03.246802    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0708 13:13:03.261238    4087 logs.go:276] 4 containers: [cf9a4e6ec101 62f64e2af710 3a5ddcecbef8 5d59f1ef8605]
	I0708 13:13:03.261309    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0708 13:13:03.276913    4087 logs.go:276] 1 containers: [1820d067a412]
	I0708 13:13:03.276989    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0708 13:13:03.287643    4087 logs.go:276] 1 containers: [2ae0eece5059]
	I0708 13:13:03.287713    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0708 13:13:03.302912    4087 logs.go:276] 1 containers: [221b47a8d8b7]
	I0708 13:13:03.302979    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0708 13:13:03.313400    4087 logs.go:276] 0 containers: []
	W0708 13:13:03.313412    4087 logs.go:278] No container was found matching "kindnet"
	I0708 13:13:03.313465    4087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0708 13:13:03.323830    4087 logs.go:276] 1 containers: [4d826cf7702d]
	I0708 13:13:03.323845    4087 logs.go:123] Gathering logs for dmesg ...
	I0708 13:13:03.323850    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 13:13:03.327994    4087 logs.go:123] Gathering logs for coredns [5d59f1ef8605] ...
	I0708 13:13:03.328003    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d59f1ef8605"
	I0708 13:13:03.345447    4087 logs.go:123] Gathering logs for kube-proxy [2ae0eece5059] ...
	I0708 13:13:03.345456    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ae0eece5059"
	I0708 13:13:03.359263    4087 logs.go:123] Gathering logs for Docker ...
	I0708 13:13:03.359275    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0708 13:13:03.383235    4087 logs.go:123] Gathering logs for kubelet ...
	I0708 13:13:03.383243    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0708 13:13:03.421616    4087 logs.go:123] Gathering logs for kube-apiserver [f1a2ddf0aafe] ...
	I0708 13:13:03.421624    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a2ddf0aafe"
	I0708 13:13:03.435402    4087 logs.go:123] Gathering logs for coredns [cf9a4e6ec101] ...
	I0708 13:13:03.435411    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf9a4e6ec101"
	I0708 13:13:03.446166    4087 logs.go:123] Gathering logs for coredns [62f64e2af710] ...
	I0708 13:13:03.446178    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62f64e2af710"
	I0708 13:13:03.457425    4087 logs.go:123] Gathering logs for kube-controller-manager [221b47a8d8b7] ...
	I0708 13:13:03.457437    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221b47a8d8b7"
	I0708 13:13:03.478556    4087 logs.go:123] Gathering logs for container status ...
	I0708 13:13:03.478565    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 13:13:03.490221    4087 logs.go:123] Gathering logs for describe nodes ...
	I0708 13:13:03.490234    4087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 13:13:03.528101    4087 logs.go:123] Gathering logs for storage-provisioner [4d826cf7702d] ...
	I0708 13:13:03.528114    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d826cf7702d"
	I0708 13:13:03.541714    4087 logs.go:123] Gathering logs for coredns [3a5ddcecbef8] ...
	I0708 13:13:03.541727    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a5ddcecbef8"
	I0708 13:13:03.553607    4087 logs.go:123] Gathering logs for kube-scheduler [1820d067a412] ...
	I0708 13:13:03.553619    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1820d067a412"
	I0708 13:13:03.568648    4087 logs.go:123] Gathering logs for etcd [394d12d0e434] ...
	I0708 13:13:03.568662    4087 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 394d12d0e434"
	I0708 13:13:06.085276    4087 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0708 13:13:11.086670    4087 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0708 13:13:11.091667    4087 out.go:177] 
	W0708 13:13:11.099986    4087 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0708 13:13:11.100014    4087 out.go:239] * 
	* 
	W0708 13:13:11.101440    4087 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:13:11.111605    4087 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-170000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.08s)

                                                
                                    
TestPause/serial/Start (10.08s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-323000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-323000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.011580042s)

                                                
                                                
-- stdout --
	* [pause-323000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-323000" primary control-plane node in "pause-323000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-323000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-323000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-323000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-323000 -n pause-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-323000 -n pause-323000: exit status 7 (62.730667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-323000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (10.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-088000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-088000 --driver=qemu2 : exit status 80 (10.044163416s)

                                                
                                                
-- stdout --
	* [NoKubernetes-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-088000" primary control-plane node in "NoKubernetes-088000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-088000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-088000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-088000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000: exit status 7 (66.290625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --driver=qemu2 : exit status 80 (5.238008125s)

                                                
                                                
-- stdout --
	* [NoKubernetes-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-088000
	* Restarting existing qemu2 VM for "NoKubernetes-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-088000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000: exit status 7 (52.186583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)

                                                
                                    
TestNoKubernetes/serial/Start (5.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --driver=qemu2 : exit status 80 (5.232407708s)

                                                
                                                
-- stdout --
	* [NoKubernetes-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-088000
	* Restarting existing qemu2 VM for "NoKubernetes-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-088000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000: exit status 7 (32.799333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-088000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-088000 --driver=qemu2 : exit status 80 (5.269954958s)

                                                
                                                
-- stdout --
	* [NoKubernetes-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-088000
	* Restarting existing qemu2 VM for "NoKubernetes-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-088000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-088000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000: exit status 7 (39.232833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.737323125s)

                                                
                                                
-- stdout --
	* [auto-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-305000" primary control-plane node in "auto-305000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:11:29.860446    4349 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:11:29.860592    4349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:11:29.860595    4349 out.go:304] Setting ErrFile to fd 2...
	I0708 13:11:29.860597    4349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:11:29.860734    4349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:11:29.861854    4349 out.go:298] Setting JSON to false
	I0708 13:11:29.878255    4349 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4257,"bootTime":1720465232,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:11:29.878314    4349 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:11:29.884828    4349 out.go:177] * [auto-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:11:29.893032    4349 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:11:29.893071    4349 notify.go:220] Checking for updates...
	I0708 13:11:29.899896    4349 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:11:29.902984    4349 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:11:29.905863    4349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:11:29.908925    4349 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:11:29.911958    4349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:11:29.915289    4349 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:11:29.915355    4349 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:11:29.915399    4349 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:11:29.919959    4349 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:11:29.926883    4349 start.go:297] selected driver: qemu2
	I0708 13:11:29.926894    4349 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:11:29.926901    4349 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:11:29.929333    4349 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:11:29.931939    4349 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:11:29.935110    4349 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:11:29.935141    4349 cni.go:84] Creating CNI manager for ""
	I0708 13:11:29.935149    4349 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:11:29.935156    4349 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 13:11:29.935189    4349 start.go:340] cluster config:
	{Name:auto-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:11:29.939104    4349 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:11:29.945954    4349 out.go:177] * Starting "auto-305000" primary control-plane node in "auto-305000" cluster
	I0708 13:11:29.949952    4349 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:11:29.949968    4349 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:11:29.949978    4349 cache.go:56] Caching tarball of preloaded images
	I0708 13:11:29.950055    4349 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:11:29.950060    4349 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:11:29.950121    4349 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/auto-305000/config.json ...
	I0708 13:11:29.950133    4349 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/auto-305000/config.json: {Name:mk2d61c5535443618552036bb14becf094f810fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:11:29.950355    4349 start.go:360] acquireMachinesLock for auto-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:11:29.950389    4349 start.go:364] duration metric: took 27.958µs to acquireMachinesLock for "auto-305000"
	I0708 13:11:29.950399    4349 start.go:93] Provisioning new machine with config: &{Name:auto-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.2 ClusterName:auto-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:11:29.950423    4349 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:11:29.957970    4349 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:11:29.975621    4349 start.go:159] libmachine.API.Create for "auto-305000" (driver="qemu2")
	I0708 13:11:29.975655    4349 client.go:168] LocalClient.Create starting
	I0708 13:11:29.975717    4349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:11:29.975751    4349 main.go:141] libmachine: Decoding PEM data...
	I0708 13:11:29.975760    4349 main.go:141] libmachine: Parsing certificate...
	I0708 13:11:29.975797    4349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:11:29.975821    4349 main.go:141] libmachine: Decoding PEM data...
	I0708 13:11:29.975830    4349 main.go:141] libmachine: Parsing certificate...
	I0708 13:11:29.976207    4349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:11:30.119804    4349 main.go:141] libmachine: Creating SSH key...
	I0708 13:11:30.181460    4349 main.go:141] libmachine: Creating Disk image...
	I0708 13:11:30.181470    4349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:11:30.181654    4349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/disk.qcow2
	I0708 13:11:30.191001    4349 main.go:141] libmachine: STDOUT: 
	I0708 13:11:30.191018    4349 main.go:141] libmachine: STDERR: 
	I0708 13:11:30.191066    4349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/disk.qcow2 +20000M
	I0708 13:11:30.198987    4349 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:11:30.198999    4349 main.go:141] libmachine: STDERR: 
	I0708 13:11:30.199031    4349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/disk.qcow2
	I0708 13:11:30.199036    4349 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:11:30.199065    4349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:bf:23:74:a6:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/disk.qcow2
	I0708 13:11:30.200668    4349 main.go:141] libmachine: STDOUT: 
	I0708 13:11:30.200682    4349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:11:30.200702    4349 client.go:171] duration metric: took 225.049708ms to LocalClient.Create
	I0708 13:11:32.202964    4349 start.go:128] duration metric: took 2.252567875s to createHost
	I0708 13:11:32.203088    4349 start.go:83] releasing machines lock for "auto-305000", held for 2.252763917s
	W0708 13:11:32.203147    4349 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:11:32.214408    4349 out.go:177] * Deleting "auto-305000" in qemu2 ...
	W0708 13:11:32.247079    4349 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:11:32.247125    4349 start.go:728] Will try again in 5 seconds ...
	I0708 13:11:37.249172    4349 start.go:360] acquireMachinesLock for auto-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:11:37.249559    4349 start.go:364] duration metric: took 303.917µs to acquireMachinesLock for "auto-305000"
	I0708 13:11:37.249604    4349 start.go:93] Provisioning new machine with config: &{Name:auto-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.2 ClusterName:auto-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:11:37.249822    4349 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:11:37.259252    4349 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:11:37.301802    4349 start.go:159] libmachine.API.Create for "auto-305000" (driver="qemu2")
	I0708 13:11:37.301845    4349 client.go:168] LocalClient.Create starting
	I0708 13:11:37.301970    4349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:11:37.302032    4349 main.go:141] libmachine: Decoding PEM data...
	I0708 13:11:37.302048    4349 main.go:141] libmachine: Parsing certificate...
	I0708 13:11:37.302104    4349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:11:37.302151    4349 main.go:141] libmachine: Decoding PEM data...
	I0708 13:11:37.302162    4349 main.go:141] libmachine: Parsing certificate...
	I0708 13:11:37.302757    4349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:11:37.459286    4349 main.go:141] libmachine: Creating SSH key...
	I0708 13:11:37.514972    4349 main.go:141] libmachine: Creating Disk image...
	I0708 13:11:37.514977    4349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:11:37.515165    4349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/disk.qcow2
	I0708 13:11:37.524657    4349 main.go:141] libmachine: STDOUT: 
	I0708 13:11:37.524676    4349 main.go:141] libmachine: STDERR: 
	I0708 13:11:37.524720    4349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/disk.qcow2 +20000M
	I0708 13:11:37.532888    4349 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:11:37.532908    4349 main.go:141] libmachine: STDERR: 
	I0708 13:11:37.532927    4349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/disk.qcow2
	I0708 13:11:37.532931    4349 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:11:37.532990    4349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c2:9e:64:59:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/auto-305000/disk.qcow2
	I0708 13:11:37.534967    4349 main.go:141] libmachine: STDOUT: 
	I0708 13:11:37.534983    4349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:11:37.534995    4349 client.go:171] duration metric: took 233.151917ms to LocalClient.Create
	I0708 13:11:39.537181    4349 start.go:128] duration metric: took 2.287316s to createHost
	I0708 13:11:39.537250    4349 start.go:83] releasing machines lock for "auto-305000", held for 2.287752458s
	W0708 13:11:39.537488    4349 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:11:39.548986    4349 out.go:177] 
	W0708 13:11:39.552080    4349 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:11:39.552101    4349 out.go:239] * 
	* 
	W0708 13:11:39.553386    4349 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:11:39.561992    4349 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.74s)
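This failure, and the kindnet and flannel start failures that follow, all trace to the same stderr line: socket_vmnet_client cannot reach the host daemon ("Failed to connect to /var/run/socket_vmnet: Connection refused"), so QEMU never receives a network file descriptor. A minimal diagnostic sketch for the CI host, assuming socket_vmnet is installed under /opt/socket_vmnet as shown in the commands above (the exact service setup may differ per machine):

	# confirm the socket_vmnet daemon is running and its unix socket exists
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet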

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.909080959s)

                                                
                                                
-- stdout --
	* [kindnet-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-305000" primary control-plane node in "kindnet-305000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:11:41.749466    4461 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:11:41.749604    4461 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:11:41.749608    4461 out.go:304] Setting ErrFile to fd 2...
	I0708 13:11:41.749610    4461 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:11:41.749734    4461 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:11:41.750818    4461 out.go:298] Setting JSON to false
	I0708 13:11:41.767317    4461 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4269,"bootTime":1720465232,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:11:41.767394    4461 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:11:41.774471    4461 out.go:177] * [kindnet-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:11:41.782279    4461 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:11:41.782333    4461 notify.go:220] Checking for updates...
	I0708 13:11:41.789406    4461 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:11:41.790814    4461 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:11:41.793489    4461 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:11:41.796425    4461 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:11:41.799427    4461 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:11:41.802755    4461 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:11:41.802816    4461 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:11:41.802862    4461 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:11:41.807380    4461 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:11:41.814407    4461 start.go:297] selected driver: qemu2
	I0708 13:11:41.814414    4461 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:11:41.814420    4461 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:11:41.816739    4461 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:11:41.819386    4461 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:11:41.822582    4461 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:11:41.822628    4461 cni.go:84] Creating CNI manager for "kindnet"
	I0708 13:11:41.822632    4461 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0708 13:11:41.822664    4461 start.go:340] cluster config:
	{Name:kindnet-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:11:41.826257    4461 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:11:41.833415    4461 out.go:177] * Starting "kindnet-305000" primary control-plane node in "kindnet-305000" cluster
	I0708 13:11:41.836406    4461 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:11:41.836421    4461 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:11:41.836432    4461 cache.go:56] Caching tarball of preloaded images
	I0708 13:11:41.836515    4461 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:11:41.836520    4461 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:11:41.836585    4461 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/kindnet-305000/config.json ...
	I0708 13:11:41.836602    4461 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/kindnet-305000/config.json: {Name:mkd7c67dea2c4fef9e382f9aefa921aa159d7b1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:11:41.837102    4461 start.go:360] acquireMachinesLock for kindnet-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:11:41.837136    4461 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "kindnet-305000"
	I0708 13:11:41.837147    4461 start.go:93] Provisioning new machine with config: &{Name:kindnet-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:kindnet-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:11:41.837177    4461 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:11:41.845321    4461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:11:41.862682    4461 start.go:159] libmachine.API.Create for "kindnet-305000" (driver="qemu2")
	I0708 13:11:41.862710    4461 client.go:168] LocalClient.Create starting
	I0708 13:11:41.862776    4461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:11:41.862805    4461 main.go:141] libmachine: Decoding PEM data...
	I0708 13:11:41.862831    4461 main.go:141] libmachine: Parsing certificate...
	I0708 13:11:41.862868    4461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:11:41.862891    4461 main.go:141] libmachine: Decoding PEM data...
	I0708 13:11:41.862901    4461 main.go:141] libmachine: Parsing certificate...
	I0708 13:11:41.863259    4461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:11:42.023488    4461 main.go:141] libmachine: Creating SSH key...
	I0708 13:11:42.096975    4461 main.go:141] libmachine: Creating Disk image...
	I0708 13:11:42.096980    4461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:11:42.097177    4461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/disk.qcow2
	I0708 13:11:42.106857    4461 main.go:141] libmachine: STDOUT: 
	I0708 13:11:42.106882    4461 main.go:141] libmachine: STDERR: 
	I0708 13:11:42.106938    4461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/disk.qcow2 +20000M
	I0708 13:11:42.114917    4461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:11:42.114935    4461 main.go:141] libmachine: STDERR: 
	I0708 13:11:42.114949    4461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/disk.qcow2
	I0708 13:11:42.114954    4461 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:11:42.114987    4461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:38:68:56:c2:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/disk.qcow2
	I0708 13:11:42.116590    4461 main.go:141] libmachine: STDOUT: 
	I0708 13:11:42.116606    4461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:11:42.116624    4461 client.go:171] duration metric: took 253.917041ms to LocalClient.Create
	I0708 13:11:44.118767    4461 start.go:128] duration metric: took 2.281636666s to createHost
	I0708 13:11:44.118858    4461 start.go:83] releasing machines lock for "kindnet-305000", held for 2.28178775s
	W0708 13:11:44.118976    4461 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:11:44.130327    4461 out.go:177] * Deleting "kindnet-305000" in qemu2 ...
	W0708 13:11:44.159738    4461 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:11:44.159771    4461 start.go:728] Will try again in 5 seconds ...
	I0708 13:11:49.161881    4461 start.go:360] acquireMachinesLock for kindnet-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:11:49.162465    4461 start.go:364] duration metric: took 467.458µs to acquireMachinesLock for "kindnet-305000"
	I0708 13:11:49.162615    4461 start.go:93] Provisioning new machine with config: &{Name:kindnet-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:kindnet-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:11:49.163006    4461 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:11:49.168718    4461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:11:49.213861    4461 start.go:159] libmachine.API.Create for "kindnet-305000" (driver="qemu2")
	I0708 13:11:49.213906    4461 client.go:168] LocalClient.Create starting
	I0708 13:11:49.214044    4461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:11:49.214123    4461 main.go:141] libmachine: Decoding PEM data...
	I0708 13:11:49.214139    4461 main.go:141] libmachine: Parsing certificate...
	I0708 13:11:49.214207    4461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:11:49.214253    4461 main.go:141] libmachine: Decoding PEM data...
	I0708 13:11:49.214268    4461 main.go:141] libmachine: Parsing certificate...
	I0708 13:11:49.214788    4461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:11:49.366492    4461 main.go:141] libmachine: Creating SSH key...
	I0708 13:11:49.573540    4461 main.go:141] libmachine: Creating Disk image...
	I0708 13:11:49.573550    4461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:11:49.573791    4461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/disk.qcow2
	I0708 13:11:49.583813    4461 main.go:141] libmachine: STDOUT: 
	I0708 13:11:49.583837    4461 main.go:141] libmachine: STDERR: 
	I0708 13:11:49.583914    4461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/disk.qcow2 +20000M
	I0708 13:11:49.592365    4461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:11:49.592377    4461 main.go:141] libmachine: STDERR: 
	I0708 13:11:49.592392    4461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/disk.qcow2
	I0708 13:11:49.592399    4461 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:11:49.592430    4461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:c2:11:98:f0:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kindnet-305000/disk.qcow2
	I0708 13:11:49.594110    4461 main.go:141] libmachine: STDOUT: 
	I0708 13:11:49.594125    4461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:11:49.594138    4461 client.go:171] duration metric: took 380.238083ms to LocalClient.Create
	I0708 13:11:51.596177    4461 start.go:128] duration metric: took 2.433222625s to createHost
	I0708 13:11:51.596202    4461 start.go:83] releasing machines lock for "kindnet-305000", held for 2.433780708s
	W0708 13:11:51.596371    4461 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:11:51.604577    4461 out.go:177] 
	W0708 13:11:51.611739    4461 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:11:51.611745    4461 out.go:239] * 
	* 
	W0708 13:11:51.612290    4461 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:11:51.621703    4461 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.91s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.763730458s)

                                                
                                                
-- stdout --
	* [flannel-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-305000" primary control-plane node in "flannel-305000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:11:53.842143    4574 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:11:53.842282    4574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:11:53.842286    4574 out.go:304] Setting ErrFile to fd 2...
	I0708 13:11:53.842288    4574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:11:53.842431    4574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:11:53.843666    4574 out.go:298] Setting JSON to false
	I0708 13:11:53.860633    4574 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4281,"bootTime":1720465232,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:11:53.860741    4574 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:11:53.867486    4574 out.go:177] * [flannel-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:11:53.871444    4574 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:11:53.871501    4574 notify.go:220] Checking for updates...
	I0708 13:11:53.877513    4574 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:11:53.880368    4574 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:11:53.883405    4574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:11:53.886427    4574 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:11:53.889450    4574 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:11:53.892706    4574 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:11:53.892766    4574 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:11:53.892809    4574 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:11:53.897414    4574 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:11:53.904442    4574 start.go:297] selected driver: qemu2
	I0708 13:11:53.904449    4574 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:11:53.904457    4574 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:11:53.906726    4574 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:11:53.909469    4574 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:11:53.912399    4574 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:11:53.912416    4574 cni.go:84] Creating CNI manager for "flannel"
	I0708 13:11:53.912423    4574 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0708 13:11:53.912449    4574 start.go:340] cluster config:
	{Name:flannel-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:11:53.915790    4574 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:11:53.925675    4574 out.go:177] * Starting "flannel-305000" primary control-plane node in "flannel-305000" cluster
	I0708 13:11:53.929424    4574 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:11:53.929439    4574 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:11:53.929447    4574 cache.go:56] Caching tarball of preloaded images
	I0708 13:11:53.929512    4574 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:11:53.929519    4574 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:11:53.929580    4574 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/flannel-305000/config.json ...
	I0708 13:11:53.929590    4574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/flannel-305000/config.json: {Name:mka40e86e747bd43bb44577decce4eebd65a3fee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:11:53.929926    4574 start.go:360] acquireMachinesLock for flannel-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:11:53.929957    4574 start.go:364] duration metric: took 25.542µs to acquireMachinesLock for "flannel-305000"
	I0708 13:11:53.929967    4574 start.go:93] Provisioning new machine with config: &{Name:flannel-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:flannel-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:11:53.930001    4574 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:11:53.938369    4574 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:11:53.953796    4574 start.go:159] libmachine.API.Create for "flannel-305000" (driver="qemu2")
	I0708 13:11:53.953821    4574 client.go:168] LocalClient.Create starting
	I0708 13:11:53.953881    4574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:11:53.953911    4574 main.go:141] libmachine: Decoding PEM data...
	I0708 13:11:53.953920    4574 main.go:141] libmachine: Parsing certificate...
	I0708 13:11:53.953959    4574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:11:53.953982    4574 main.go:141] libmachine: Decoding PEM data...
	I0708 13:11:53.953991    4574 main.go:141] libmachine: Parsing certificate...
	I0708 13:11:53.954423    4574 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:11:54.097794    4574 main.go:141] libmachine: Creating SSH key...
	I0708 13:11:54.170582    4574 main.go:141] libmachine: Creating Disk image...
	I0708 13:11:54.170590    4574 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:11:54.170798    4574 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/disk.qcow2
	I0708 13:11:54.180077    4574 main.go:141] libmachine: STDOUT: 
	I0708 13:11:54.180093    4574 main.go:141] libmachine: STDERR: 
	I0708 13:11:54.180136    4574 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/disk.qcow2 +20000M
	I0708 13:11:54.188015    4574 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:11:54.188029    4574 main.go:141] libmachine: STDERR: 
	I0708 13:11:54.188047    4574 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/disk.qcow2
	I0708 13:11:54.188050    4574 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:11:54.188079    4574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:16:7c:35:db:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/disk.qcow2
	I0708 13:11:54.189614    4574 main.go:141] libmachine: STDOUT: 
	I0708 13:11:54.189627    4574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:11:54.189647    4574 client.go:171] duration metric: took 235.829291ms to LocalClient.Create
	I0708 13:11:56.191799    4574 start.go:128] duration metric: took 2.261839375s to createHost
	I0708 13:11:56.191886    4574 start.go:83] releasing machines lock for "flannel-305000", held for 2.261994375s
	W0708 13:11:56.192031    4574 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:11:56.203454    4574 out.go:177] * Deleting "flannel-305000" in qemu2 ...
	W0708 13:11:56.231473    4574 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:11:56.231500    4574 start.go:728] Will try again in 5 seconds ...
	I0708 13:12:01.233424    4574 start.go:360] acquireMachinesLock for flannel-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:12:01.233689    4574 start.go:364] duration metric: took 168.833µs to acquireMachinesLock for "flannel-305000"
	I0708 13:12:01.233719    4574 start.go:93] Provisioning new machine with config: &{Name:flannel-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:flannel-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:12:01.233813    4574 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:12:01.241170    4574 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:12:01.265771    4574 start.go:159] libmachine.API.Create for "flannel-305000" (driver="qemu2")
	I0708 13:12:01.265805    4574 client.go:168] LocalClient.Create starting
	I0708 13:12:01.265880    4574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:12:01.265924    4574 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:01.265934    4574 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:01.265973    4574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:12:01.266001    4574 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:01.266013    4574 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:01.266420    4574 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:12:01.413075    4574 main.go:141] libmachine: Creating SSH key...
	I0708 13:12:01.520212    4574 main.go:141] libmachine: Creating Disk image...
	I0708 13:12:01.520218    4574 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:12:01.520413    4574 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/disk.qcow2
	I0708 13:12:01.529654    4574 main.go:141] libmachine: STDOUT: 
	I0708 13:12:01.529680    4574 main.go:141] libmachine: STDERR: 
	I0708 13:12:01.529732    4574 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/disk.qcow2 +20000M
	I0708 13:12:01.538292    4574 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:12:01.538307    4574 main.go:141] libmachine: STDERR: 
	I0708 13:12:01.538328    4574 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/disk.qcow2
	I0708 13:12:01.538333    4574 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:12:01.538377    4574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:28:8c:d0:34:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/flannel-305000/disk.qcow2
	I0708 13:12:01.540239    4574 main.go:141] libmachine: STDOUT: 
	I0708 13:12:01.540254    4574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:12:01.540268    4574 client.go:171] duration metric: took 274.468167ms to LocalClient.Create
	I0708 13:12:03.542384    4574 start.go:128] duration metric: took 2.308615417s to createHost
	I0708 13:12:03.542446    4574 start.go:83] releasing machines lock for "flannel-305000", held for 2.308822292s
	W0708 13:12:03.542824    4574 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:03.551203    4574 out.go:177] 
	W0708 13:12:03.556414    4574 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:12:03.556439    4574 out.go:239] * 
	* 
	W0708 13:12:03.557732    4574 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:12:03.569369    4574 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.76s)

TestNetworkPlugins/group/enable-default-cni/Start (9.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.877982125s)

                                                
                                                
-- stdout --
	* [enable-default-cni-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-305000" primary control-plane node in "enable-default-cni-305000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:12:05.947828    4693 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:12:05.947965    4693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:12:05.947969    4693 out.go:304] Setting ErrFile to fd 2...
	I0708 13:12:05.947972    4693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:12:05.948098    4693 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:12:05.949128    4693 out.go:298] Setting JSON to false
	I0708 13:12:05.965519    4693 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4293,"bootTime":1720465232,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:12:05.965638    4693 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:12:05.973079    4693 out.go:177] * [enable-default-cni-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:12:05.981005    4693 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:12:05.981033    4693 notify.go:220] Checking for updates...
	I0708 13:12:05.989081    4693 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:12:05.990453    4693 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:12:05.993098    4693 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:12:05.996109    4693 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:12:05.999157    4693 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:12:06.002365    4693 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:12:06.002431    4693 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:12:06.002486    4693 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:12:06.007095    4693 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:12:06.014122    4693 start.go:297] selected driver: qemu2
	I0708 13:12:06.014132    4693 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:12:06.014141    4693 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:12:06.016526    4693 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:12:06.020060    4693 out.go:177] * Automatically selected the socket_vmnet network
	E0708 13:12:06.023155    4693 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0708 13:12:06.023169    4693 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:12:06.023184    4693 cni.go:84] Creating CNI manager for "bridge"
	I0708 13:12:06.023188    4693 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 13:12:06.023232    4693 start.go:340] cluster config:
	{Name:enable-default-cni-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:12:06.027176    4693 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:12:06.035087    4693 out.go:177] * Starting "enable-default-cni-305000" primary control-plane node in "enable-default-cni-305000" cluster
	I0708 13:12:06.038016    4693 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:12:06.038030    4693 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:12:06.038036    4693 cache.go:56] Caching tarball of preloaded images
	I0708 13:12:06.038091    4693 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:12:06.038096    4693 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:12:06.038143    4693 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/enable-default-cni-305000/config.json ...
	I0708 13:12:06.038154    4693 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/enable-default-cni-305000/config.json: {Name:mkfe6392b1e2044b407fea1261e43f119e8f3023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:12:06.038425    4693 start.go:360] acquireMachinesLock for enable-default-cni-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:12:06.038458    4693 start.go:364] duration metric: took 26.833µs to acquireMachinesLock for "enable-default-cni-305000"
	I0708 13:12:06.038471    4693 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:12:06.038508    4693 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:12:06.043087    4693 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:12:06.058247    4693 start.go:159] libmachine.API.Create for "enable-default-cni-305000" (driver="qemu2")
	I0708 13:12:06.058270    4693 client.go:168] LocalClient.Create starting
	I0708 13:12:06.058332    4693 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:12:06.058360    4693 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:06.058367    4693 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:06.058402    4693 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:12:06.058424    4693 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:06.058432    4693 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:06.058791    4693 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:12:06.202148    4693 main.go:141] libmachine: Creating SSH key...
	I0708 13:12:06.403407    4693 main.go:141] libmachine: Creating Disk image...
	I0708 13:12:06.403418    4693 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:12:06.403634    4693 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/disk.qcow2
	I0708 13:12:06.413505    4693 main.go:141] libmachine: STDOUT: 
	I0708 13:12:06.413525    4693 main.go:141] libmachine: STDERR: 
	I0708 13:12:06.413585    4693 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/disk.qcow2 +20000M
	I0708 13:12:06.421602    4693 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:12:06.421617    4693 main.go:141] libmachine: STDERR: 
	I0708 13:12:06.421628    4693 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/disk.qcow2
	I0708 13:12:06.421633    4693 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:12:06.421666    4693 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:21:0b:2b:c8:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/disk.qcow2
	I0708 13:12:06.423374    4693 main.go:141] libmachine: STDOUT: 
	I0708 13:12:06.423388    4693 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:12:06.423409    4693 client.go:171] duration metric: took 365.144291ms to LocalClient.Create
	I0708 13:12:08.423560    4693 start.go:128] duration metric: took 2.385117875s to createHost
	I0708 13:12:08.423583    4693 start.go:83] releasing machines lock for "enable-default-cni-305000", held for 2.385198333s
	W0708 13:12:08.423638    4693 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:08.432196    4693 out.go:177] * Deleting "enable-default-cni-305000" in qemu2 ...
	W0708 13:12:08.444287    4693 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:08.444312    4693 start.go:728] Will try again in 5 seconds ...
	I0708 13:12:13.446266    4693 start.go:360] acquireMachinesLock for enable-default-cni-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:12:13.446582    4693 start.go:364] duration metric: took 272.083µs to acquireMachinesLock for "enable-default-cni-305000"
	I0708 13:12:13.446662    4693 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:12:13.446791    4693 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:12:13.456104    4693 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:12:13.494874    4693 start.go:159] libmachine.API.Create for "enable-default-cni-305000" (driver="qemu2")
	I0708 13:12:13.494917    4693 client.go:168] LocalClient.Create starting
	I0708 13:12:13.495041    4693 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:12:13.495108    4693 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:13.495121    4693 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:13.495186    4693 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:12:13.495227    4693 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:13.495239    4693 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:13.495671    4693 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:12:13.646303    4693 main.go:141] libmachine: Creating SSH key...
	I0708 13:12:13.739881    4693 main.go:141] libmachine: Creating Disk image...
	I0708 13:12:13.739896    4693 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:12:13.740107    4693 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/disk.qcow2
	I0708 13:12:13.749304    4693 main.go:141] libmachine: STDOUT: 
	I0708 13:12:13.749329    4693 main.go:141] libmachine: STDERR: 
	I0708 13:12:13.749386    4693 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/disk.qcow2 +20000M
	I0708 13:12:13.757337    4693 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:12:13.757351    4693 main.go:141] libmachine: STDERR: 
	I0708 13:12:13.757368    4693 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/disk.qcow2
	I0708 13:12:13.757378    4693 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:12:13.757418    4693 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:64:b8:90:4e:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/enable-default-cni-305000/disk.qcow2
	I0708 13:12:13.759114    4693 main.go:141] libmachine: STDOUT: 
	I0708 13:12:13.759128    4693 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:12:13.759142    4693 client.go:171] duration metric: took 264.228375ms to LocalClient.Create
	I0708 13:12:15.761299    4693 start.go:128] duration metric: took 2.31454425s to createHost
	I0708 13:12:15.761375    4693 start.go:83] releasing machines lock for "enable-default-cni-305000", held for 2.314849916s
	W0708 13:12:15.761809    4693 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:15.768441    4693 out.go:177] 
	W0708 13:12:15.772518    4693 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:12:15.772575    4693 out.go:239] * 
	* 
	W0708 13:12:15.775186    4693 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:12:15.783516    4693 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.88s)

TestNetworkPlugins/group/bridge/Start (9.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.811740334s)

                                                
                                                
-- stdout --
	* [bridge-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-305000" primary control-plane node in "bridge-305000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:12:18.006787    4804 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:12:18.006901    4804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:12:18.006904    4804 out.go:304] Setting ErrFile to fd 2...
	I0708 13:12:18.006907    4804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:12:18.007025    4804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:12:18.008117    4804 out.go:298] Setting JSON to false
	I0708 13:12:18.024604    4804 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4306,"bootTime":1720465232,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:12:18.024700    4804 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:12:18.028678    4804 out.go:177] * [bridge-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:12:18.035540    4804 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:12:18.035619    4804 notify.go:220] Checking for updates...
	I0708 13:12:18.043511    4804 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:12:18.046492    4804 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:12:18.050513    4804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:12:18.053522    4804 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:12:18.056453    4804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:12:18.059825    4804 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:12:18.059897    4804 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:12:18.059943    4804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:12:18.063540    4804 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:12:18.070500    4804 start.go:297] selected driver: qemu2
	I0708 13:12:18.070506    4804 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:12:18.070517    4804 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:12:18.072681    4804 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:12:18.076562    4804 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:12:18.079551    4804 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:12:18.079571    4804 cni.go:84] Creating CNI manager for "bridge"
	I0708 13:12:18.079576    4804 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 13:12:18.079607    4804 start.go:340] cluster config:
	{Name:bridge-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:12:18.083176    4804 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:12:18.090492    4804 out.go:177] * Starting "bridge-305000" primary control-plane node in "bridge-305000" cluster
	I0708 13:12:18.094517    4804 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:12:18.094530    4804 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:12:18.094538    4804 cache.go:56] Caching tarball of preloaded images
	I0708 13:12:18.094585    4804 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:12:18.094590    4804 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:12:18.094639    4804 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/bridge-305000/config.json ...
	I0708 13:12:18.094649    4804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/bridge-305000/config.json: {Name:mkdb5d5743b88a904f5ba86da01944126c97a31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:12:18.094903    4804 start.go:360] acquireMachinesLock for bridge-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:12:18.094932    4804 start.go:364] duration metric: took 24.25µs to acquireMachinesLock for "bridge-305000"
	I0708 13:12:18.094946    4804 start.go:93] Provisioning new machine with config: &{Name:bridge-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:bridge-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:12:18.094972    4804 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:12:18.098470    4804 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:12:18.113692    4804 start.go:159] libmachine.API.Create for "bridge-305000" (driver="qemu2")
	I0708 13:12:18.113725    4804 client.go:168] LocalClient.Create starting
	I0708 13:12:18.113791    4804 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:12:18.113821    4804 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:18.113831    4804 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:18.113867    4804 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:12:18.113890    4804 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:18.113898    4804 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:18.114372    4804 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:12:18.258565    4804 main.go:141] libmachine: Creating SSH key...
	I0708 13:12:18.404637    4804 main.go:141] libmachine: Creating Disk image...
	I0708 13:12:18.404651    4804 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:12:18.404876    4804 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/disk.qcow2
	I0708 13:12:18.414446    4804 main.go:141] libmachine: STDOUT: 
	I0708 13:12:18.414468    4804 main.go:141] libmachine: STDERR: 
	I0708 13:12:18.414523    4804 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/disk.qcow2 +20000M
	I0708 13:12:18.422526    4804 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:12:18.422539    4804 main.go:141] libmachine: STDERR: 
	I0708 13:12:18.422555    4804 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/disk.qcow2
	I0708 13:12:18.422561    4804 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:12:18.422590    4804 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:68:84:ad:0c:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/disk.qcow2
	I0708 13:12:18.424280    4804 main.go:141] libmachine: STDOUT: 
	I0708 13:12:18.424298    4804 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:12:18.424318    4804 client.go:171] duration metric: took 310.597833ms to LocalClient.Create
	I0708 13:12:20.426431    4804 start.go:128] duration metric: took 2.331508834s to createHost
	I0708 13:12:20.426499    4804 start.go:83] releasing machines lock for "bridge-305000", held for 2.331637709s
	W0708 13:12:20.426646    4804 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:20.436256    4804 out.go:177] * Deleting "bridge-305000" in qemu2 ...
	W0708 13:12:20.462810    4804 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:20.462842    4804 start.go:728] Will try again in 5 seconds ...
	I0708 13:12:25.464821    4804 start.go:360] acquireMachinesLock for bridge-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:12:25.465073    4804 start.go:364] duration metric: took 199.125µs to acquireMachinesLock for "bridge-305000"
	I0708 13:12:25.465096    4804 start.go:93] Provisioning new machine with config: &{Name:bridge-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:bridge-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:12:25.465177    4804 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:12:25.471511    4804 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:12:25.498711    4804 start.go:159] libmachine.API.Create for "bridge-305000" (driver="qemu2")
	I0708 13:12:25.498756    4804 client.go:168] LocalClient.Create starting
	I0708 13:12:25.498846    4804 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:12:25.498899    4804 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:25.498921    4804 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:25.498969    4804 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:12:25.499001    4804 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:25.499014    4804 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:25.499401    4804 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:12:25.647050    4804 main.go:141] libmachine: Creating SSH key...
	I0708 13:12:25.728957    4804 main.go:141] libmachine: Creating Disk image...
	I0708 13:12:25.728964    4804 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:12:25.729163    4804 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/disk.qcow2
	I0708 13:12:25.738921    4804 main.go:141] libmachine: STDOUT: 
	I0708 13:12:25.738947    4804 main.go:141] libmachine: STDERR: 
	I0708 13:12:25.738995    4804 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/disk.qcow2 +20000M
	I0708 13:12:25.747132    4804 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:12:25.747146    4804 main.go:141] libmachine: STDERR: 
	I0708 13:12:25.747161    4804 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/disk.qcow2
	I0708 13:12:25.747166    4804 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:12:25.747201    4804 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:1c:6b:f6:90:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/bridge-305000/disk.qcow2
	I0708 13:12:25.748809    4804 main.go:141] libmachine: STDOUT: 
	I0708 13:12:25.748824    4804 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:12:25.748836    4804 client.go:171] duration metric: took 250.08275ms to LocalClient.Create
	I0708 13:12:27.750979    4804 start.go:128] duration metric: took 2.285844708s to createHost
	I0708 13:12:27.751059    4804 start.go:83] releasing machines lock for "bridge-305000", held for 2.286047917s
	W0708 13:12:27.751433    4804 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:27.760011    4804 out.go:177] 
	W0708 13:12:27.767147    4804 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:12:27.767189    4804 out.go:239] * 
	* 
	W0708 13:12:27.769622    4804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:12:27.778044    4804 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.81s)
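
Every start in this group fails at the same point: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client and the client cannot reach the daemon socket, so LocalClient.Create aborts with "Connection refused". A minimal Go probe along the following lines (illustrative only, not part of minikube or this test suite; the socket path is taken from the log above) can confirm from the same host whether the socket file exists and whether anything is currently accepting connections on it:

// probe_socket_vmnet.go: hypothetical standalone probe for the socket that
// the qemu2 driver tries to use. Not part of minikube or net_test.go.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from the failing log lines above; adjust if socket_vmnet
	// is installed elsewhere on the host.
	const sock = "/var/run/socket_vmnet"

	if fi, err := os.Stat(sock); err != nil {
		fmt.Printf("stat %s: %v (socket file missing?)\n", sock, err)
	} else {
		fmt.Printf("%s exists, mode %v\n", sock, fi.Mode())
	}

	// "Connection refused" on an existing socket file usually means no
	// daemon is accepting connections on it.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("dial %s: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial also reports "connection refused", the daemon behind /var/run/socket_vmnet is not running on the agent, which matches the repeated GUEST_PROVISION exits in this report.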

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.753168333s)

                                                
                                                
-- stdout --
	* [kubenet-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-305000" primary control-plane node in "kubenet-305000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:12:29.986297    4914 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:12:29.986438    4914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:12:29.986442    4914 out.go:304] Setting ErrFile to fd 2...
	I0708 13:12:29.986444    4914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:12:29.986576    4914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:12:29.987685    4914 out.go:298] Setting JSON to false
	I0708 13:12:30.003583    4914 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4317,"bootTime":1720465232,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:12:30.003651    4914 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:12:30.010822    4914 out.go:177] * [kubenet-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:12:30.018869    4914 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:12:30.018908    4914 notify.go:220] Checking for updates...
	I0708 13:12:30.025812    4914 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:12:30.028804    4914 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:12:30.031850    4914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:12:30.034808    4914 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:12:30.037867    4914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:12:30.041091    4914 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:12:30.041153    4914 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:12:30.041199    4914 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:12:30.044805    4914 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:12:30.051697    4914 start.go:297] selected driver: qemu2
	I0708 13:12:30.051703    4914 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:12:30.051712    4914 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:12:30.053900    4914 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:12:30.057757    4914 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:12:30.060845    4914 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:12:30.060857    4914 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0708 13:12:30.060883    4914 start.go:340] cluster config:
	{Name:kubenet-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubenet-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:12:30.064335    4914 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:12:30.071771    4914 out.go:177] * Starting "kubenet-305000" primary control-plane node in "kubenet-305000" cluster
	I0708 13:12:30.075664    4914 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:12:30.075678    4914 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:12:30.075689    4914 cache.go:56] Caching tarball of preloaded images
	I0708 13:12:30.075742    4914 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:12:30.075747    4914 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:12:30.075817    4914 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/kubenet-305000/config.json ...
	I0708 13:12:30.075835    4914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/kubenet-305000/config.json: {Name:mk564fb95208bd13d97c19b9e7e6f656269fbdf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:12:30.076160    4914 start.go:360] acquireMachinesLock for kubenet-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:12:30.076188    4914 start.go:364] duration metric: took 23.625µs to acquireMachinesLock for "kubenet-305000"
	I0708 13:12:30.076198    4914 start.go:93] Provisioning new machine with config: &{Name:kubenet-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:kubenet-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:12:30.076227    4914 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:12:30.083768    4914 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:12:30.098671    4914 start.go:159] libmachine.API.Create for "kubenet-305000" (driver="qemu2")
	I0708 13:12:30.098692    4914 client.go:168] LocalClient.Create starting
	I0708 13:12:30.098755    4914 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:12:30.098790    4914 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:30.098800    4914 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:30.098833    4914 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:12:30.098856    4914 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:30.098864    4914 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:30.099219    4914 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:12:30.244510    4914 main.go:141] libmachine: Creating SSH key...
	I0708 13:12:30.370160    4914 main.go:141] libmachine: Creating Disk image...
	I0708 13:12:30.370174    4914 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:12:30.370373    4914 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/disk.qcow2
	I0708 13:12:30.380063    4914 main.go:141] libmachine: STDOUT: 
	I0708 13:12:30.380085    4914 main.go:141] libmachine: STDERR: 
	I0708 13:12:30.380149    4914 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/disk.qcow2 +20000M
	I0708 13:12:30.388308    4914 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:12:30.388326    4914 main.go:141] libmachine: STDERR: 
	I0708 13:12:30.388355    4914 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/disk.qcow2
	I0708 13:12:30.388360    4914 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:12:30.388398    4914 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:89:2d:50:e6:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/disk.qcow2
	I0708 13:12:30.390051    4914 main.go:141] libmachine: STDOUT: 
	I0708 13:12:30.390062    4914 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:12:30.390080    4914 client.go:171] duration metric: took 291.393458ms to LocalClient.Create
	I0708 13:12:32.392146    4914 start.go:128] duration metric: took 2.315983208s to createHost
	I0708 13:12:32.392187    4914 start.go:83] releasing machines lock for "kubenet-305000", held for 2.316065s
	W0708 13:12:32.392248    4914 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:32.401062    4914 out.go:177] * Deleting "kubenet-305000" in qemu2 ...
	W0708 13:12:32.421968    4914 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:32.421979    4914 start.go:728] Will try again in 5 seconds ...
	I0708 13:12:37.423982    4914 start.go:360] acquireMachinesLock for kubenet-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:12:37.424274    4914 start.go:364] duration metric: took 229.167µs to acquireMachinesLock for "kubenet-305000"
	I0708 13:12:37.424351    4914 start.go:93] Provisioning new machine with config: &{Name:kubenet-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:kubenet-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:12:37.424498    4914 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:12:37.433888    4914 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:12:37.464192    4914 start.go:159] libmachine.API.Create for "kubenet-305000" (driver="qemu2")
	I0708 13:12:37.464236    4914 client.go:168] LocalClient.Create starting
	I0708 13:12:37.464334    4914 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:12:37.464396    4914 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:37.464408    4914 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:37.464461    4914 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:12:37.464499    4914 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:37.464516    4914 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:37.465026    4914 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:12:37.614178    4914 main.go:141] libmachine: Creating SSH key...
	I0708 13:12:37.655858    4914 main.go:141] libmachine: Creating Disk image...
	I0708 13:12:37.655864    4914 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:12:37.656074    4914 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/disk.qcow2
	I0708 13:12:37.665324    4914 main.go:141] libmachine: STDOUT: 
	I0708 13:12:37.665342    4914 main.go:141] libmachine: STDERR: 
	I0708 13:12:37.665393    4914 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/disk.qcow2 +20000M
	I0708 13:12:37.673269    4914 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:12:37.673284    4914 main.go:141] libmachine: STDERR: 
	I0708 13:12:37.673295    4914 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/disk.qcow2
	I0708 13:12:37.673299    4914 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:12:37.673338    4914 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:2c:78:7c:d2:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/kubenet-305000/disk.qcow2
	I0708 13:12:37.675096    4914 main.go:141] libmachine: STDOUT: 
	I0708 13:12:37.675113    4914 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:12:37.675126    4914 client.go:171] duration metric: took 210.891584ms to LocalClient.Create
	I0708 13:12:39.677134    4914 start.go:128] duration metric: took 2.25269775s to createHost
	I0708 13:12:39.677145    4914 start.go:83] releasing machines lock for "kubenet-305000", held for 2.252935833s
	W0708 13:12:39.677252    4914 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:39.685320    4914 out.go:177] 
	W0708 13:12:39.690537    4914 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:12:39.690550    4914 out.go:239] * 
	* 
	W0708 13:12:39.691080    4914 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:12:39.702461    4914 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.75s)
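
The kubenet log shows the same two-attempt flow as the other profiles: the first create fails, the profile is deleted, and after a 5-second pause one retry is made before minikube exits with GUEST_PROVISION. A simplified sketch of that retry-once pattern (not minikube's actual code; createHost here is a stand-in that always fails the way the log does):

// retry_once.go: simplified illustration of the retry-once flow visible in
// the log above. Purely a sketch; minikube's real implementation differs.
package main

import (
	"errors"
	"fmt"
	"time"
)

func createHost() error {
	// Stand-in for libmachine.API.Create; fails the way the log does when
	// socket_vmnet is unreachable.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		// In the real flow the broken profile is deleted here.
		time.Sleep(5 * time.Second)
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			return
		}
	}
	fmt.Println("host created")
}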

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.838877083s)

                                                
                                                
-- stdout --
	* [custom-flannel-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-305000" primary control-plane node in "custom-flannel-305000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:12:41.860090    5026 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:12:41.860234    5026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:12:41.860238    5026 out.go:304] Setting ErrFile to fd 2...
	I0708 13:12:41.860241    5026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:12:41.860359    5026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:12:41.861607    5026 out.go:298] Setting JSON to false
	I0708 13:12:41.877978    5026 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4329,"bootTime":1720465232,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:12:41.878044    5026 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:12:41.883298    5026 out.go:177] * [custom-flannel-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:12:41.890159    5026 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:12:41.890198    5026 notify.go:220] Checking for updates...
	I0708 13:12:41.897184    5026 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:12:41.900174    5026 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:12:41.903162    5026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:12:41.906096    5026 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:12:41.909132    5026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:12:41.912559    5026 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:12:41.912636    5026 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:12:41.912681    5026 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:12:41.915073    5026 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:12:41.922136    5026 start.go:297] selected driver: qemu2
	I0708 13:12:41.922143    5026 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:12:41.922149    5026 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:12:41.924407    5026 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:12:41.925816    5026 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:12:41.929184    5026 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:12:41.929235    5026 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0708 13:12:41.929243    5026 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0708 13:12:41.929272    5026 start.go:340] cluster config:
	{Name:custom-flannel-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:12:41.932801    5026 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:12:41.940122    5026 out.go:177] * Starting "custom-flannel-305000" primary control-plane node in "custom-flannel-305000" cluster
	I0708 13:12:41.944169    5026 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:12:41.944187    5026 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:12:41.944195    5026 cache.go:56] Caching tarball of preloaded images
	I0708 13:12:41.944259    5026 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:12:41.944266    5026 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:12:41.944323    5026 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/custom-flannel-305000/config.json ...
	I0708 13:12:41.944334    5026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/custom-flannel-305000/config.json: {Name:mkc1e28fd9eb52285a35a99f37718d0f85790798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:12:41.944636    5026 start.go:360] acquireMachinesLock for custom-flannel-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:12:41.944668    5026 start.go:364] duration metric: took 23.875µs to acquireMachinesLock for "custom-flannel-305000"
	I0708 13:12:41.944677    5026 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:12:41.944715    5026 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:12:41.949108    5026 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:12:41.964264    5026 start.go:159] libmachine.API.Create for "custom-flannel-305000" (driver="qemu2")
	I0708 13:12:41.964291    5026 client.go:168] LocalClient.Create starting
	I0708 13:12:41.964345    5026 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:12:41.964374    5026 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:41.964383    5026 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:41.964417    5026 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:12:41.964439    5026 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:41.964445    5026 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:41.964848    5026 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:12:42.108293    5026 main.go:141] libmachine: Creating SSH key...
	I0708 13:12:42.241246    5026 main.go:141] libmachine: Creating Disk image...
	I0708 13:12:42.241252    5026 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:12:42.241455    5026 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/disk.qcow2
	I0708 13:12:42.250952    5026 main.go:141] libmachine: STDOUT: 
	I0708 13:12:42.250977    5026 main.go:141] libmachine: STDERR: 
	I0708 13:12:42.251032    5026 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/disk.qcow2 +20000M
	I0708 13:12:42.258926    5026 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:12:42.258944    5026 main.go:141] libmachine: STDERR: 
	I0708 13:12:42.258958    5026 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/disk.qcow2
	I0708 13:12:42.258961    5026 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:12:42.258990    5026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:8a:e2:3e:d8:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/disk.qcow2
	I0708 13:12:42.260667    5026 main.go:141] libmachine: STDOUT: 
	I0708 13:12:42.260685    5026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:12:42.260709    5026 client.go:171] duration metric: took 296.423375ms to LocalClient.Create
	I0708 13:12:44.262822    5026 start.go:128] duration metric: took 2.318163833s to createHost
	I0708 13:12:44.262873    5026 start.go:83] releasing machines lock for "custom-flannel-305000", held for 2.318274834s
	W0708 13:12:44.262944    5026 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:44.276553    5026 out.go:177] * Deleting "custom-flannel-305000" in qemu2 ...
	W0708 13:12:44.300863    5026 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:44.300886    5026 start.go:728] Will try again in 5 seconds ...
	I0708 13:12:49.303068    5026 start.go:360] acquireMachinesLock for custom-flannel-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:12:49.303661    5026 start.go:364] duration metric: took 473.75µs to acquireMachinesLock for "custom-flannel-305000"
	I0708 13:12:49.303793    5026 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:12:49.304080    5026 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:12:49.312658    5026 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:12:49.361271    5026 start.go:159] libmachine.API.Create for "custom-flannel-305000" (driver="qemu2")
	I0708 13:12:49.361330    5026 client.go:168] LocalClient.Create starting
	I0708 13:12:49.361533    5026 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:12:49.361598    5026 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:49.361616    5026 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:49.361678    5026 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:12:49.361723    5026 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:49.361738    5026 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:49.362278    5026 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:12:49.516356    5026 main.go:141] libmachine: Creating SSH key...
	I0708 13:12:49.609062    5026 main.go:141] libmachine: Creating Disk image...
	I0708 13:12:49.609071    5026 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:12:49.609315    5026 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/disk.qcow2
	I0708 13:12:49.618682    5026 main.go:141] libmachine: STDOUT: 
	I0708 13:12:49.618701    5026 main.go:141] libmachine: STDERR: 
	I0708 13:12:49.618750    5026 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/disk.qcow2 +20000M
	I0708 13:12:49.626620    5026 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:12:49.626636    5026 main.go:141] libmachine: STDERR: 
	I0708 13:12:49.626646    5026 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/disk.qcow2
	I0708 13:12:49.626651    5026 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:12:49.626681    5026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:9b:7c:c5:ad:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/custom-flannel-305000/disk.qcow2
	I0708 13:12:49.628341    5026 main.go:141] libmachine: STDOUT: 
	I0708 13:12:49.628356    5026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:12:49.628368    5026 client.go:171] duration metric: took 267.040042ms to LocalClient.Create
	I0708 13:12:51.630471    5026 start.go:128] duration metric: took 2.326436s to createHost
	I0708 13:12:51.630529    5026 start.go:83] releasing machines lock for "custom-flannel-305000", held for 2.326919666s
	W0708 13:12:51.630922    5026 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:51.640488    5026 out.go:177] 
	W0708 13:12:51.646535    5026 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:12:51.646582    5026 out.go:239] * 
	* 
	W0708 13:12:51.649398    5026 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:12:51.657454    5026 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.84s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.754806708s)

                                                
                                                
-- stdout --
	* [calico-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-305000" primary control-plane node in "calico-305000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:12:54.030707    5145 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:12:54.030854    5145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:12:54.030857    5145 out.go:304] Setting ErrFile to fd 2...
	I0708 13:12:54.030859    5145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:12:54.030972    5145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:12:54.032051    5145 out.go:298] Setting JSON to false
	I0708 13:12:54.048449    5145 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4342,"bootTime":1720465232,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:12:54.048514    5145 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:12:54.055428    5145 out.go:177] * [calico-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:12:54.063376    5145 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:12:54.063435    5145 notify.go:220] Checking for updates...
	I0708 13:12:54.070417    5145 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:12:54.073424    5145 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:12:54.076456    5145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:12:54.079409    5145 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:12:54.082398    5145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:12:54.085709    5145 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:12:54.085778    5145 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:12:54.085850    5145 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:12:54.089420    5145 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:12:54.096392    5145 start.go:297] selected driver: qemu2
	I0708 13:12:54.096398    5145 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:12:54.096404    5145 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:12:54.098517    5145 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:12:54.101407    5145 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:12:54.102780    5145 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:12:54.102817    5145 cni.go:84] Creating CNI manager for "calico"
	I0708 13:12:54.102821    5145 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0708 13:12:54.102852    5145 start.go:340] cluster config:
	{Name:calico-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:12:54.106240    5145 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:12:54.113429    5145 out.go:177] * Starting "calico-305000" primary control-plane node in "calico-305000" cluster
	I0708 13:12:54.117363    5145 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:12:54.117374    5145 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:12:54.117381    5145 cache.go:56] Caching tarball of preloaded images
	I0708 13:12:54.117431    5145 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:12:54.117435    5145 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:12:54.117481    5145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/calico-305000/config.json ...
	I0708 13:12:54.117490    5145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/calico-305000/config.json: {Name:mk93571b02462cc6ce0975e1710cbd1924fe4a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:12:54.117935    5145 start.go:360] acquireMachinesLock for calico-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:12:54.117963    5145 start.go:364] duration metric: took 23.542µs to acquireMachinesLock for "calico-305000"
	I0708 13:12:54.117978    5145 start.go:93] Provisioning new machine with config: &{Name:calico-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:calico-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:12:54.118052    5145 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:12:54.126403    5145 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:12:54.141720    5145 start.go:159] libmachine.API.Create for "calico-305000" (driver="qemu2")
	I0708 13:12:54.141745    5145 client.go:168] LocalClient.Create starting
	I0708 13:12:54.141809    5145 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:12:54.141838    5145 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:54.141847    5145 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:54.141881    5145 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:12:54.141907    5145 main.go:141] libmachine: Decoding PEM data...
	I0708 13:12:54.141915    5145 main.go:141] libmachine: Parsing certificate...
	I0708 13:12:54.142360    5145 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:12:54.288217    5145 main.go:141] libmachine: Creating SSH key...
	I0708 13:12:54.385880    5145 main.go:141] libmachine: Creating Disk image...
	I0708 13:12:54.385895    5145 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:12:54.386100    5145 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/disk.qcow2
	I0708 13:12:54.395277    5145 main.go:141] libmachine: STDOUT: 
	I0708 13:12:54.395295    5145 main.go:141] libmachine: STDERR: 
	I0708 13:12:54.395368    5145 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/disk.qcow2 +20000M
	I0708 13:12:54.403730    5145 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:12:54.403749    5145 main.go:141] libmachine: STDERR: 
	I0708 13:12:54.403770    5145 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/disk.qcow2
	I0708 13:12:54.403775    5145 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:12:54.403817    5145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:60:85:39:3c:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/disk.qcow2
	I0708 13:12:54.405585    5145 main.go:141] libmachine: STDOUT: 
	I0708 13:12:54.405602    5145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:12:54.405621    5145 client.go:171] duration metric: took 263.878291ms to LocalClient.Create
	I0708 13:12:56.407784    5145 start.go:128] duration metric: took 2.2897825s to createHost
	I0708 13:12:56.407877    5145 start.go:83] releasing machines lock for "calico-305000", held for 2.289981833s
	W0708 13:12:56.407931    5145 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:56.418835    5145 out.go:177] * Deleting "calico-305000" in qemu2 ...
	W0708 13:12:56.441452    5145 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:12:56.441483    5145 start.go:728] Will try again in 5 seconds ...
	I0708 13:13:01.443632    5145 start.go:360] acquireMachinesLock for calico-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:01.444094    5145 start.go:364] duration metric: took 361.958µs to acquireMachinesLock for "calico-305000"
	I0708 13:13:01.444158    5145 start.go:93] Provisioning new machine with config: &{Name:calico-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:calico-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:13:01.444423    5145 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:13:01.453829    5145 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:13:01.494836    5145 start.go:159] libmachine.API.Create for "calico-305000" (driver="qemu2")
	I0708 13:13:01.494873    5145 client.go:168] LocalClient.Create starting
	I0708 13:13:01.494976    5145 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:13:01.495036    5145 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:01.495050    5145 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:01.495107    5145 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:13:01.495158    5145 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:01.495172    5145 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:01.495697    5145 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:13:01.646433    5145 main.go:141] libmachine: Creating SSH key...
	I0708 13:13:01.699122    5145 main.go:141] libmachine: Creating Disk image...
	I0708 13:13:01.699128    5145 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:13:01.699322    5145 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/disk.qcow2
	I0708 13:13:01.708631    5145 main.go:141] libmachine: STDOUT: 
	I0708 13:13:01.708648    5145 main.go:141] libmachine: STDERR: 
	I0708 13:13:01.708697    5145 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/disk.qcow2 +20000M
	I0708 13:13:01.716564    5145 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:13:01.716579    5145 main.go:141] libmachine: STDERR: 
	I0708 13:13:01.716591    5145 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/disk.qcow2
	I0708 13:13:01.716596    5145 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:13:01.716639    5145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:20:02:8a:37:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/calico-305000/disk.qcow2
	I0708 13:13:01.718408    5145 main.go:141] libmachine: STDOUT: 
	I0708 13:13:01.718421    5145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:01.718434    5145 client.go:171] duration metric: took 223.564792ms to LocalClient.Create
	I0708 13:13:03.720459    5145 start.go:128] duration metric: took 2.276093959s to createHost
	I0708 13:13:03.720505    5145 start.go:83] releasing machines lock for "calico-305000", held for 2.276471291s
	W0708 13:13:03.720631    5145 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:03.728851    5145 out.go:177] 
	W0708 13:13:03.735972    5145 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:13:03.735982    5145 out.go:239] * 
	* 
	W0708 13:13:03.736790    5145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:13:03.748009    5145 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.76s)
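Because the identical `socket_vmnet_client ... qemu-system-aarch64 ...` invocation fails for every profile, the refusal can likely be reproduced without minikube by wrapping a trivial command instead of QEMU. A hedged sketch, assuming socket_vmnet_client will exec an arbitrary command once the socket is connected (the `true` placeholder is not from this report):

	# hedged reproduction sketch, independent of the test harness
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  && echo "socket_vmnet reachable" \
	  || echo "connection refused -- restart the socket_vmnet daemon before re-running the suite"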

                                                
                                    
TestNetworkPlugins/group/false/Start (9.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-305000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.821801083s)

                                                
                                                
-- stdout --
	* [false-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-305000" primary control-plane node in "false-305000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:13:06.106465    5267 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:13:06.106599    5267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:06.106602    5267 out.go:304] Setting ErrFile to fd 2...
	I0708 13:13:06.106604    5267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:06.106728    5267 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:13:06.107866    5267 out.go:298] Setting JSON to false
	I0708 13:13:06.124169    5267 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4354,"bootTime":1720465232,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:13:06.124236    5267 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:13:06.129992    5267 out.go:177] * [false-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:13:06.137843    5267 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:13:06.137871    5267 notify.go:220] Checking for updates...
	I0708 13:13:06.144799    5267 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:13:06.147875    5267 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:13:06.150847    5267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:13:06.152141    5267 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:13:06.154821    5267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:13:06.158169    5267 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:13:06.158240    5267 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:13:06.158292    5267 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:13:06.162663    5267 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:13:06.169870    5267 start.go:297] selected driver: qemu2
	I0708 13:13:06.169877    5267 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:13:06.169884    5267 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:13:06.172118    5267 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:13:06.174849    5267 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:13:06.177921    5267 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:13:06.177935    5267 cni.go:84] Creating CNI manager for "false"
	I0708 13:13:06.177959    5267 start.go:340] cluster config:
	{Name:false-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:13:06.181473    5267 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:06.187787    5267 out.go:177] * Starting "false-305000" primary control-plane node in "false-305000" cluster
	I0708 13:13:06.191824    5267 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:13:06.191840    5267 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:13:06.191846    5267 cache.go:56] Caching tarball of preloaded images
	I0708 13:13:06.191902    5267 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:13:06.191907    5267 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:13:06.191972    5267 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/false-305000/config.json ...
	I0708 13:13:06.191992    5267 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/false-305000/config.json: {Name:mk5644b8ed5a2e377d1998355c574995eab2fda6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:13:06.192330    5267 start.go:360] acquireMachinesLock for false-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:06.192360    5267 start.go:364] duration metric: took 24.458µs to acquireMachinesLock for "false-305000"
	I0708 13:13:06.192369    5267 start.go:93] Provisioning new machine with config: &{Name:false-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:false-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:13:06.192395    5267 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:13:06.199809    5267 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:13:06.214881    5267 start.go:159] libmachine.API.Create for "false-305000" (driver="qemu2")
	I0708 13:13:06.214908    5267 client.go:168] LocalClient.Create starting
	I0708 13:13:06.214977    5267 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:13:06.215007    5267 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:06.215018    5267 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:06.215059    5267 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:13:06.215085    5267 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:06.215093    5267 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:06.215566    5267 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:13:06.359748    5267 main.go:141] libmachine: Creating SSH key...
	I0708 13:13:06.444018    5267 main.go:141] libmachine: Creating Disk image...
	I0708 13:13:06.444030    5267 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:13:06.444264    5267 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/disk.qcow2
	I0708 13:13:06.453380    5267 main.go:141] libmachine: STDOUT: 
	I0708 13:13:06.453401    5267 main.go:141] libmachine: STDERR: 
	I0708 13:13:06.453448    5267 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/disk.qcow2 +20000M
	I0708 13:13:06.461421    5267 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:13:06.461435    5267 main.go:141] libmachine: STDERR: 
	I0708 13:13:06.461447    5267 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/disk.qcow2
	I0708 13:13:06.461451    5267 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:13:06.461477    5267 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:3b:ed:bf:15:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/disk.qcow2
	I0708 13:13:06.463205    5267 main.go:141] libmachine: STDOUT: 
	I0708 13:13:06.463220    5267 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:06.463236    5267 client.go:171] duration metric: took 248.331666ms to LocalClient.Create
	I0708 13:13:08.465355    5267 start.go:128] duration metric: took 2.273002709s to createHost
	I0708 13:13:08.465416    5267 start.go:83] releasing machines lock for "false-305000", held for 2.2731245s
	W0708 13:13:08.465498    5267 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:08.476393    5267 out.go:177] * Deleting "false-305000" in qemu2 ...
	W0708 13:13:08.498822    5267 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:08.498835    5267 start.go:728] Will try again in 5 seconds ...
	I0708 13:13:13.500948    5267 start.go:360] acquireMachinesLock for false-305000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:13.501562    5267 start.go:364] duration metric: took 489.875µs to acquireMachinesLock for "false-305000"
	I0708 13:13:13.501630    5267 start.go:93] Provisioning new machine with config: &{Name:false-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:false-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:13:13.501964    5267 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:13:13.511594    5267 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0708 13:13:13.561548    5267 start.go:159] libmachine.API.Create for "false-305000" (driver="qemu2")
	I0708 13:13:13.561605    5267 client.go:168] LocalClient.Create starting
	I0708 13:13:13.561730    5267 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:13:13.561807    5267 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:13.561820    5267 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:13.561893    5267 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:13:13.561938    5267 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:13.561948    5267 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:13.562486    5267 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:13:13.724603    5267 main.go:141] libmachine: Creating SSH key...
	I0708 13:13:13.838814    5267 main.go:141] libmachine: Creating Disk image...
	I0708 13:13:13.838825    5267 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:13:13.839048    5267 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/disk.qcow2
	I0708 13:13:13.848627    5267 main.go:141] libmachine: STDOUT: 
	I0708 13:13:13.848648    5267 main.go:141] libmachine: STDERR: 
	I0708 13:13:13.848724    5267 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/disk.qcow2 +20000M
	I0708 13:13:13.856736    5267 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:13:13.856752    5267 main.go:141] libmachine: STDERR: 
	I0708 13:13:13.856763    5267 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/disk.qcow2
	I0708 13:13:13.856767    5267 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:13:13.856805    5267 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:90:b9:b3:ea:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/false-305000/disk.qcow2
	I0708 13:13:13.858442    5267 main.go:141] libmachine: STDOUT: 
	I0708 13:13:13.858475    5267 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:13.858496    5267 client.go:171] duration metric: took 296.893417ms to LocalClient.Create
	I0708 13:13:15.860612    5267 start.go:128] duration metric: took 2.3586915s to createHost
	I0708 13:13:15.860681    5267 start.go:83] releasing machines lock for "false-305000", held for 2.35917375s
	W0708 13:13:15.860977    5267 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:15.873543    5267 out.go:177] 
	W0708 13:13:15.877582    5267 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:13:15.877598    5267 out.go:239] * 
	* 
	W0708 13:13:15.878880    5267 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:13:15.890594    5267 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.82s)
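If the daemon cannot be brought back quickly, the advice box in each failure already names the follow-up step; a hedged example of collecting those logs for one of the profiles from this run (profile name taken from the false/Start test above, and the command may itself report errors while the VM is absent):

	# hedged example of the log collection the failure box requests
	out/minikube-darwin-arm64 logs -p false-305000 --file=logs.txt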

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-376000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-376000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.67065475s)

                                                
                                                
-- stdout --
	* [old-k8s-version-376000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-376000" primary control-plane node in "old-k8s-version-376000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-376000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:13:18.052026    5380 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:13:18.052171    5380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:18.052177    5380 out.go:304] Setting ErrFile to fd 2...
	I0708 13:13:18.052180    5380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:18.052338    5380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:13:18.053467    5380 out.go:298] Setting JSON to false
	I0708 13:13:18.070055    5380 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4366,"bootTime":1720465232,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:13:18.070119    5380 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:13:18.077366    5380 out.go:177] * [old-k8s-version-376000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:13:18.085282    5380 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:13:18.085326    5380 notify.go:220] Checking for updates...
	I0708 13:13:18.092390    5380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:13:18.093833    5380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:13:18.097359    5380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:13:18.100417    5380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:13:18.103388    5380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:13:18.106649    5380 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:13:18.106717    5380 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:13:18.106768    5380 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:13:18.110345    5380 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:13:18.117325    5380 start.go:297] selected driver: qemu2
	I0708 13:13:18.117331    5380 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:13:18.117336    5380 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:13:18.119638    5380 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:13:18.122394    5380 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:13:18.125499    5380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:13:18.125544    5380 cni.go:84] Creating CNI manager for ""
	I0708 13:13:18.125555    5380 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0708 13:13:18.125599    5380 start.go:340] cluster config:
	{Name:old-k8s-version-376000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:13:18.129234    5380 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:18.132410    5380 out.go:177] * Starting "old-k8s-version-376000" primary control-plane node in "old-k8s-version-376000" cluster
	I0708 13:13:18.139332    5380 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0708 13:13:18.139345    5380 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0708 13:13:18.139351    5380 cache.go:56] Caching tarball of preloaded images
	I0708 13:13:18.139403    5380 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:13:18.139408    5380 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0708 13:13:18.139470    5380 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/old-k8s-version-376000/config.json ...
	I0708 13:13:18.139481    5380 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/old-k8s-version-376000/config.json: {Name:mk5eddfe93e103e6e6e87941fde59420fe7584d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:13:18.139808    5380 start.go:360] acquireMachinesLock for old-k8s-version-376000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:18.139843    5380 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "old-k8s-version-376000"
	I0708 13:13:18.139856    5380 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-376000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:13:18.139884    5380 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:13:18.148187    5380 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 13:13:18.165870    5380 start.go:159] libmachine.API.Create for "old-k8s-version-376000" (driver="qemu2")
	I0708 13:13:18.165901    5380 client.go:168] LocalClient.Create starting
	I0708 13:13:18.165962    5380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:13:18.165995    5380 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:18.166005    5380 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:18.166048    5380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:13:18.166075    5380 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:18.166082    5380 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:18.166513    5380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:13:18.311818    5380 main.go:141] libmachine: Creating SSH key...
	I0708 13:13:18.343632    5380 main.go:141] libmachine: Creating Disk image...
	I0708 13:13:18.343637    5380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:13:18.343836    5380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2
	I0708 13:13:18.353141    5380 main.go:141] libmachine: STDOUT: 
	I0708 13:13:18.353160    5380 main.go:141] libmachine: STDERR: 
	I0708 13:13:18.353227    5380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2 +20000M
	I0708 13:13:18.361316    5380 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:13:18.361337    5380 main.go:141] libmachine: STDERR: 
	I0708 13:13:18.361352    5380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2
	I0708 13:13:18.361360    5380 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:13:18.361397    5380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:51:c6:f2:4e:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2
	I0708 13:13:18.363080    5380 main.go:141] libmachine: STDOUT: 
	I0708 13:13:18.363096    5380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:18.363113    5380 client.go:171] duration metric: took 197.214834ms to LocalClient.Create
	I0708 13:13:20.365305    5380 start.go:128] duration metric: took 2.225466041s to createHost
	I0708 13:13:20.365401    5380 start.go:83] releasing machines lock for "old-k8s-version-376000", held for 2.225621459s
	W0708 13:13:20.365486    5380 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:20.383536    5380 out.go:177] * Deleting "old-k8s-version-376000" in qemu2 ...
	W0708 13:13:20.408694    5380 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:20.408720    5380 start.go:728] Will try again in 5 seconds ...
	I0708 13:13:25.410085    5380 start.go:360] acquireMachinesLock for old-k8s-version-376000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:25.410604    5380 start.go:364] duration metric: took 404.458µs to acquireMachinesLock for "old-k8s-version-376000"
	I0708 13:13:25.410719    5380 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-376000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:13:25.410953    5380 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:13:25.418724    5380 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 13:13:25.451523    5380 start.go:159] libmachine.API.Create for "old-k8s-version-376000" (driver="qemu2")
	I0708 13:13:25.451565    5380 client.go:168] LocalClient.Create starting
	I0708 13:13:25.451667    5380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:13:25.451718    5380 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:25.451732    5380 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:25.451783    5380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:13:25.451817    5380 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:25.451827    5380 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:25.452284    5380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:13:25.599467    5380 main.go:141] libmachine: Creating SSH key...
	I0708 13:13:25.639119    5380 main.go:141] libmachine: Creating Disk image...
	I0708 13:13:25.639124    5380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:13:25.639322    5380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2
	I0708 13:13:25.648820    5380 main.go:141] libmachine: STDOUT: 
	I0708 13:13:25.648836    5380 main.go:141] libmachine: STDERR: 
	I0708 13:13:25.648889    5380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2 +20000M
	I0708 13:13:25.656676    5380 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:13:25.656702    5380 main.go:141] libmachine: STDERR: 
	I0708 13:13:25.656716    5380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2
	I0708 13:13:25.656720    5380 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:13:25.656749    5380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:66:68:e9:4a:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2
	I0708 13:13:25.658392    5380 main.go:141] libmachine: STDOUT: 
	I0708 13:13:25.658407    5380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:25.658420    5380 client.go:171] duration metric: took 206.856791ms to LocalClient.Create
	I0708 13:13:27.659681    5380 start.go:128] duration metric: took 2.248781958s to createHost
	I0708 13:13:27.659723    5380 start.go:83] releasing machines lock for "old-k8s-version-376000", held for 2.249152417s
	W0708 13:13:27.659889    5380 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-376000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-376000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:27.669213    5380 out.go:177] 
	W0708 13:13:27.674102    5380 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:13:27.674125    5380 out.go:239] * 
	* 
	W0708 13:13:27.675031    5380 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:13:27.685185    5380 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-376000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000: exit status 7 (34.088916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-376000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.71s)
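Note: every start attempt in this group dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and the remaining old-k8s-version sub-tests are cascades of this one failure. A minimal host-side diagnostic sketch (hypothetical, not part of the test suite) that probes the socket path used by the failing qemu invocation above:

	// probe_socket_vmnet.go: hypothetical check, assuming the daemon should be
	// listening on the unix socket path shown in the failing command above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1) // would reproduce the "Connection refused" seen in this run
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

On this host the probe would fail exactly as the tests do, which points at the socket_vmnet daemon on the host rather than at minikube itself.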

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-376000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-376000 create -f testdata/busybox.yaml: exit status 1 (27.463625ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-376000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-376000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000: exit status 7 (32.003708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-376000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000: exit status 7 (33.067166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-376000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
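Note: DeployApp never gets to apply testdata/busybox.yaml because FirstStart never created the "old-k8s-version-376000" kubeconfig context. A hedged sketch of that pre-condition check using client-go (file name and logic are illustrative; the kubeconfig fallback mirrors the KUBECONFIG value printed earlier in the log):

	// context_check.go: hypothetical pre-flight check, assuming k8s.io/client-go is available.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		if env := os.Getenv("KUBECONFIG"); env != "" {
			kubeconfig = env
		}
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Fprintf(os.Stderr, "cannot load kubeconfig: %v\n", err)
			os.Exit(1)
		}
		const ctx = "old-k8s-version-376000" // the context the failing kubectl calls expect
		if _, ok := cfg.Contexts[ctx]; !ok {
			fmt.Fprintf(os.Stderr, "context %q does not exist\n", ctx)
			os.Exit(1)
		}
		fmt.Printf("context %q is present\n", ctx)
	}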

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-376000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-376000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-376000 describe deploy/metrics-server -n kube-system: exit status 1 (28.090584ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-376000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-376000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000: exit status 7 (30.567416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-376000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-376000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-376000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.180418s)

                                                
                                                
-- stdout --
	* [old-k8s-version-376000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-376000" primary control-plane node in "old-k8s-version-376000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-376000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-376000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:13:29.997113    5426 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:13:29.997246    5426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:29.997249    5426 out.go:304] Setting ErrFile to fd 2...
	I0708 13:13:29.997251    5426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:29.997393    5426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:13:29.998462    5426 out.go:298] Setting JSON to false
	I0708 13:13:30.014946    5426 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4378,"bootTime":1720465232,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:13:30.015020    5426 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:13:30.019780    5426 out.go:177] * [old-k8s-version-376000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:13:30.027781    5426 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:13:30.027827    5426 notify.go:220] Checking for updates...
	I0708 13:13:30.034757    5426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:13:30.037769    5426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:13:30.040769    5426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:13:30.043768    5426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:13:30.046710    5426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:13:30.050006    5426 config.go:182] Loaded profile config "old-k8s-version-376000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0708 13:13:30.052751    5426 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0708 13:13:30.055797    5426 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:13:30.059719    5426 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 13:13:30.066683    5426 start.go:297] selected driver: qemu2
	I0708 13:13:30.066688    5426 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-376000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:13:30.066738    5426 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:13:30.068957    5426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:13:30.068993    5426 cni.go:84] Creating CNI manager for ""
	I0708 13:13:30.069001    5426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0708 13:13:30.069022    5426 start.go:340] cluster config:
	{Name:old-k8s-version-376000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-376000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:13:30.072465    5426 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:30.079770    5426 out.go:177] * Starting "old-k8s-version-376000" primary control-plane node in "old-k8s-version-376000" cluster
	I0708 13:13:30.083782    5426 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0708 13:13:30.083797    5426 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0708 13:13:30.083809    5426 cache.go:56] Caching tarball of preloaded images
	I0708 13:13:30.083872    5426 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:13:30.083876    5426 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0708 13:13:30.083929    5426 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/old-k8s-version-376000/config.json ...
	I0708 13:13:30.084387    5426 start.go:360] acquireMachinesLock for old-k8s-version-376000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:30.084413    5426 start.go:364] duration metric: took 20.5µs to acquireMachinesLock for "old-k8s-version-376000"
	I0708 13:13:30.084421    5426 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:13:30.084428    5426 fix.go:54] fixHost starting: 
	I0708 13:13:30.084539    5426 fix.go:112] recreateIfNeeded on old-k8s-version-376000: state=Stopped err=<nil>
	W0708 13:13:30.084548    5426 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:13:30.087634    5426 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-376000" ...
	I0708 13:13:30.095734    5426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:66:68:e9:4a:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2
	I0708 13:13:30.097552    5426 main.go:141] libmachine: STDOUT: 
	I0708 13:13:30.097567    5426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:30.097592    5426 fix.go:56] duration metric: took 13.165083ms for fixHost
	I0708 13:13:30.097597    5426 start.go:83] releasing machines lock for "old-k8s-version-376000", held for 13.1815ms
	W0708 13:13:30.097602    5426 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:13:30.097630    5426 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:30.097634    5426 start.go:728] Will try again in 5 seconds ...
	I0708 13:13:35.099640    5426 start.go:360] acquireMachinesLock for old-k8s-version-376000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:35.100010    5426 start.go:364] duration metric: took 260.834µs to acquireMachinesLock for "old-k8s-version-376000"
	I0708 13:13:35.100106    5426 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:13:35.100120    5426 fix.go:54] fixHost starting: 
	I0708 13:13:35.100605    5426 fix.go:112] recreateIfNeeded on old-k8s-version-376000: state=Stopped err=<nil>
	W0708 13:13:35.100624    5426 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:13:35.104988    5426 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-376000" ...
	I0708 13:13:35.109046    5426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:66:68:e9:4a:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/old-k8s-version-376000/disk.qcow2
	I0708 13:13:35.116205    5426 main.go:141] libmachine: STDOUT: 
	I0708 13:13:35.116252    5426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:35.116310    5426 fix.go:56] duration metric: took 16.19075ms for fixHost
	I0708 13:13:35.116325    5426 start.go:83] releasing machines lock for "old-k8s-version-376000", held for 16.266541ms
	W0708 13:13:35.116442    5426 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-376000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-376000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:35.122990    5426 out.go:177] 
	W0708 13:13:35.127030    5426 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:13:35.127062    5426 out.go:239] * 
	* 
	W0708 13:13:35.128313    5426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:13:35.137912    5426 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-376000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000: exit status 7 (57.838333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-376000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
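Note: SecondStart follows the same shape as the first run: one start attempt against the existing (stopped) machine, the line "Will try again in 5 seconds ...", one retry, then GUEST_PROVISION. A minimal sketch of that retry pattern (hypothetical code, not minikube's implementation; startHost stands in for the driver start call):

	// retry.go: illustrative version of the single-retry flow visible in the log above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start call; here it always fails,
	// mirroring the socket_vmnet "Connection refused" seen in this run.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const attempts = 2            // the log shows one initial try plus one retry
		const delay = 5 * time.Second // matches "Will try again in 5 seconds ..."
		var err error
		for i := 0; i < attempts; i++ {
			if err = startHost(); err == nil {
				fmt.Println("host started")
				return
			}
			if i < attempts-1 {
				fmt.Printf("! StartHost failed, but will try again: %v\n", err)
				time.Sleep(delay)
			}
		}
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}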

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-376000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000: exit status 7 (30.4285ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-376000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-376000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-376000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-376000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.163667ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-376000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-376000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000: exit status 7 (29.609792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-376000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-376000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000: exit status 7 (29.03225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-376000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
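Note: the "(-want +got)" layout of the missing-images list reads like go-cmp output: every expected v1.20.0 image sits on the -want side because the VM never booted, so "image list" has nothing to offer on the +got side. A small sketch of that kind of comparison, assuming github.com/google/go-cmp is available (the shortened want list mirrors the log above):

	// image_diff.go: illustrative want/got comparison, not the test's actual code.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// want is a shortened copy of the expected-image list printed above.
		want := []string{
			"k8s.gcr.io/coredns:1.7.0",
			"k8s.gcr.io/etcd:3.4.13-0",
			"k8s.gcr.io/kube-apiserver:v1.20.0",
			"k8s.gcr.io/pause:3.2",
		}
		// got is empty: the host never started, so there are no images to list.
		got := []string{}
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
		}
	}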

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-376000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-376000 --alsologtostderr -v=1: exit status 83 (39.336125ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-376000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-376000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:13:35.396915    5445 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:13:35.397793    5445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:35.397800    5445 out.go:304] Setting ErrFile to fd 2...
	I0708 13:13:35.397802    5445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:35.397968    5445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:13:35.398210    5445 out.go:298] Setting JSON to false
	I0708 13:13:35.398220    5445 mustload.go:65] Loading cluster: old-k8s-version-376000
	I0708 13:13:35.398430    5445 config.go:182] Loaded profile config "old-k8s-version-376000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0708 13:13:35.403191    5445 out.go:177] * The control-plane node old-k8s-version-376000 host is not running: state=Stopped
	I0708 13:13:35.404466    5445 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-376000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-376000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000: exit status 7 (28.5815ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-376000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000: exit status 7 (29.595708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-376000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (10.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-172000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-172000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (9.986389417s)

                                                
                                                
-- stdout --
	* [no-preload-172000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-172000" primary control-plane node in "no-preload-172000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-172000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:13:35.703202    5462 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:13:35.703339    5462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:35.703342    5462 out.go:304] Setting ErrFile to fd 2...
	I0708 13:13:35.703344    5462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:35.703485    5462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:13:35.704571    5462 out.go:298] Setting JSON to false
	I0708 13:13:35.720444    5462 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4383,"bootTime":1720465232,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:13:35.720521    5462 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:13:35.725104    5462 out.go:177] * [no-preload-172000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:13:35.731084    5462 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:13:35.731191    5462 notify.go:220] Checking for updates...
	I0708 13:13:35.739059    5462 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:13:35.743100    5462 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:13:35.745945    5462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:13:35.749044    5462 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:13:35.752026    5462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:13:35.753815    5462 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:13:35.753878    5462 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0708 13:13:35.753932    5462 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:13:35.758061    5462 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:13:35.764921    5462 start.go:297] selected driver: qemu2
	I0708 13:13:35.764928    5462 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:13:35.764935    5462 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:13:35.767196    5462 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:13:35.769984    5462 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:13:35.774171    5462 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:13:35.774223    5462 cni.go:84] Creating CNI manager for ""
	I0708 13:13:35.774235    5462 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:13:35.774239    5462 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 13:13:35.774294    5462 start.go:340] cluster config:
	{Name:no-preload-172000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:13:35.778451    5462 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:35.786004    5462 out.go:177] * Starting "no-preload-172000" primary control-plane node in "no-preload-172000" cluster
	I0708 13:13:35.790070    5462 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:13:35.790171    5462 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/no-preload-172000/config.json ...
	I0708 13:13:35.790179    5462 cache.go:107] acquiring lock: {Name:mk48eaa7950e96669e6f1d9da14b3b30130cdc0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:35.790203    5462 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/no-preload-172000/config.json: {Name:mka963c0c637946f157e4dc27b832b0dfe1351b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:13:35.790224    5462 cache.go:107] acquiring lock: {Name:mkdafd497465fc2943c37f836ad83e8d95caffbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:35.790239    5462 cache.go:107] acquiring lock: {Name:mkdd58986f390eceea6b2b3f4f0dba958421ad62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:35.790293    5462 cache.go:115] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0708 13:13:35.790300    5462 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 126.209µs
	I0708 13:13:35.790306    5462 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0708 13:13:35.790371    5462 cache.go:107] acquiring lock: {Name:mk6cfcb5ec6cbe6a9123165e565261bcd41f23f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:35.790411    5462 cache.go:107] acquiring lock: {Name:mkcf30ac866c6fd660f58a47b540f2be21e4e364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:35.790535    5462 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0708 13:13:35.790619    5462 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0708 13:13:35.790599    5462 cache.go:107] acquiring lock: {Name:mkae5c5e921428fcb2a41e76cffeb3f2ca669259 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:35.790627    5462 cache.go:107] acquiring lock: {Name:mk2738b766c65d9ab473cf07ed12b9b8c9309344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:35.790662    5462 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 13:13:35.790699    5462 cache.go:107] acquiring lock: {Name:mk6deaa588e28385ded8f13fc509d78c0ad604d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:35.790775    5462 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 13:13:35.790797    5462 start.go:360] acquireMachinesLock for no-preload-172000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:35.790830    5462 start.go:364] duration metric: took 28.166µs to acquireMachinesLock for "no-preload-172000"
	I0708 13:13:35.790842    5462 start.go:93] Provisioning new machine with config: &{Name:no-preload-172000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:no-preload-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:13:35.790867    5462 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 13:13:35.790872    5462 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:13:35.790887    5462 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 13:13:35.790955    5462 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 13:13:35.794086    5462 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 13:13:35.798251    5462 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0708 13:13:35.801188    5462 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.2
	I0708 13:13:35.801255    5462 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.2
	I0708 13:13:35.801256    5462 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0708 13:13:35.801278    5462 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.2
	I0708 13:13:35.801388    5462 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.2
	I0708 13:13:35.801477    5462 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0708 13:13:35.810214    5462 start.go:159] libmachine.API.Create for "no-preload-172000" (driver="qemu2")
	I0708 13:13:35.810235    5462 client.go:168] LocalClient.Create starting
	I0708 13:13:35.810299    5462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:13:35.810330    5462 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:35.810338    5462 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:35.810376    5462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:13:35.810403    5462 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:35.810413    5462 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:35.810749    5462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:13:35.962890    5462 main.go:141] libmachine: Creating SSH key...
	I0708 13:13:36.219220    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2
	I0708 13:13:36.220085    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0708 13:13:36.220505    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0708 13:13:36.231502    5462 main.go:141] libmachine: Creating Disk image...
	I0708 13:13:36.231512    5462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:13:36.231723    5462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2
	I0708 13:13:36.241325    5462 main.go:141] libmachine: STDOUT: 
	I0708 13:13:36.241343    5462 main.go:141] libmachine: STDERR: 
	I0708 13:13:36.241391    5462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2 +20000M
	I0708 13:13:36.250046    5462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:13:36.250070    5462 main.go:141] libmachine: STDERR: 
	I0708 13:13:36.250092    5462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2
	I0708 13:13:36.250097    5462 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:13:36.250133    5462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:14:aa:77:b0:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2
	I0708 13:13:36.252007    5462 main.go:141] libmachine: STDOUT: 
	I0708 13:13:36.252043    5462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:36.252062    5462 client.go:171] duration metric: took 441.836584ms to LocalClient.Create
	I0708 13:13:36.259775    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2
	I0708 13:13:36.305623    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2
	I0708 13:13:36.333173    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0708 13:13:36.468753    5462 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0708 13:13:36.468781    5462 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 678.577208ms
	I0708 13:13:36.468795    5462 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0708 13:13:37.005934    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2
	I0708 13:13:38.252110    5462 start.go:128] duration metric: took 2.461309167s to createHost
	I0708 13:13:38.252142    5462 start.go:83] releasing machines lock for "no-preload-172000", held for 2.461388833s
	W0708 13:13:38.252165    5462 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:38.261544    5462 out.go:177] * Deleting "no-preload-172000" in qemu2 ...
	W0708 13:13:38.276158    5462 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:38.276173    5462 start.go:728] Will try again in 5 seconds ...
	I0708 13:13:39.108394    5462 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0708 13:13:39.108418    5462 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.317849875s
	I0708 13:13:39.108426    5462 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0708 13:13:39.627926    5462 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 exists
	I0708 13:13:39.627942    5462 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.2" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2" took 3.837847959s
	I0708 13:13:39.627949    5462 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.2 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 succeeded
	I0708 13:13:40.111204    5462 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 exists
	I0708 13:13:40.111218    5462 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.2" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2" took 4.320778458s
	I0708 13:13:40.111229    5462 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.2 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 succeeded
	I0708 13:13:40.283033    5462 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 exists
	I0708 13:13:40.283053    5462 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.2" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2" took 4.492921375s
	I0708 13:13:40.283064    5462 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.2 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 succeeded
	I0708 13:13:40.805175    5462 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 exists
	I0708 13:13:40.805215    5462 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.2" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2" took 5.014801s
	I0708 13:13:40.805230    5462 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.2 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 succeeded
	I0708 13:13:42.815447    5462 cache.go:157] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0708 13:13:42.815510    5462 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 7.025376708s
	I0708 13:13:42.815537    5462 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0708 13:13:42.815595    5462 cache.go:87] Successfully saved all images to host disk.
	I0708 13:13:43.277534    5462 start.go:360] acquireMachinesLock for no-preload-172000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:43.278141    5462 start.go:364] duration metric: took 512.584µs to acquireMachinesLock for "no-preload-172000"
	I0708 13:13:43.278290    5462 start.go:93] Provisioning new machine with config: &{Name:no-preload-172000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:no-preload-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:13:43.278568    5462 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:13:43.286995    5462 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 13:13:43.339597    5462 start.go:159] libmachine.API.Create for "no-preload-172000" (driver="qemu2")
	I0708 13:13:43.339651    5462 client.go:168] LocalClient.Create starting
	I0708 13:13:43.339762    5462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:13:43.339845    5462 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:43.339862    5462 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:43.339932    5462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:13:43.339982    5462 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:43.339994    5462 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:43.340549    5462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:13:43.502155    5462 main.go:141] libmachine: Creating SSH key...
	I0708 13:13:43.605415    5462 main.go:141] libmachine: Creating Disk image...
	I0708 13:13:43.605421    5462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:13:43.605631    5462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2
	I0708 13:13:43.615194    5462 main.go:141] libmachine: STDOUT: 
	I0708 13:13:43.615215    5462 main.go:141] libmachine: STDERR: 
	I0708 13:13:43.615265    5462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2 +20000M
	I0708 13:13:43.623251    5462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:13:43.623268    5462 main.go:141] libmachine: STDERR: 
	I0708 13:13:43.623281    5462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2
	I0708 13:13:43.623285    5462 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:13:43.623331    5462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:e6:19:28:63:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2
	I0708 13:13:43.625198    5462 main.go:141] libmachine: STDOUT: 
	I0708 13:13:43.625216    5462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:43.625239    5462 client.go:171] duration metric: took 285.587958ms to LocalClient.Create
	I0708 13:13:45.627331    5462 start.go:128] duration metric: took 2.348801792s to createHost
	I0708 13:13:45.627436    5462 start.go:83] releasing machines lock for "no-preload-172000", held for 2.349346833s
	W0708 13:13:45.627688    5462 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-172000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-172000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:45.640103    5462 out.go:177] 
	W0708 13:13:45.644256    5462 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:13:45.644297    5462 out.go:239] * 
	* 
	W0708 13:13:45.646360    5462 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:13:45.655119    5462 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-172000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000: exit status 7 (48.988833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.04s)
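
Every attempt in this test fails at the same step: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so the QEMU VM is never launched and minikube exits with status 80. A quick way to confirm whether the socket_vmnet daemon is actually listening before re-running the suite is to dial the unix socket directly. The sketch below is a stand-alone illustration written for this report (it is not part of minikube or its test suite) and assumes only the socket path shown in the logs above.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the "Connection refused" errors in the logs above.
	const sock = "/var/run/socket_vmnet"

	// If the daemon is not running, Dial fails with "connection refused"
	// (or "no such file or directory" if the socket file is missing) --
	// the same condition socket_vmnet_client reports in the failed runs.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}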

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (11.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-604000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-604000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (11.13962s)

                                                
                                                
-- stdout --
	* [embed-certs-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-604000" primary control-plane node in "embed-certs-604000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-604000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:13:44.241457    5511 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:13:44.241594    5511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:44.241597    5511 out.go:304] Setting ErrFile to fd 2...
	I0708 13:13:44.241599    5511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:44.241717    5511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:13:44.242757    5511 out.go:298] Setting JSON to false
	I0708 13:13:44.258875    5511 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4392,"bootTime":1720465232,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:13:44.258944    5511 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:13:44.262730    5511 out.go:177] * [embed-certs-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:13:44.269652    5511 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:13:44.269734    5511 notify.go:220] Checking for updates...
	I0708 13:13:44.276607    5511 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:13:44.279652    5511 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:13:44.282666    5511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:13:44.285633    5511 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:13:44.288663    5511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:13:44.291944    5511 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:13:44.292015    5511 config.go:182] Loaded profile config "no-preload-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:13:44.292071    5511 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:13:44.296591    5511 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:13:44.303694    5511 start.go:297] selected driver: qemu2
	I0708 13:13:44.303700    5511 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:13:44.303708    5511 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:13:44.305888    5511 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:13:44.308657    5511 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:13:44.311738    5511 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:13:44.311774    5511 cni.go:84] Creating CNI manager for ""
	I0708 13:13:44.311790    5511 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:13:44.311796    5511 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 13:13:44.311824    5511 start.go:340] cluster config:
	{Name:embed-certs-604000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:13:44.315479    5511 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:44.323577    5511 out.go:177] * Starting "embed-certs-604000" primary control-plane node in "embed-certs-604000" cluster
	I0708 13:13:44.327668    5511 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:13:44.327684    5511 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:13:44.327695    5511 cache.go:56] Caching tarball of preloaded images
	I0708 13:13:44.327749    5511 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:13:44.327755    5511 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:13:44.327831    5511 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/embed-certs-604000/config.json ...
	I0708 13:13:44.327842    5511 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/embed-certs-604000/config.json: {Name:mk972aff018b3f07c7774d0baabc26d6b1b1af83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:13:44.328058    5511 start.go:360] acquireMachinesLock for embed-certs-604000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:45.627574    5511 start.go:364] duration metric: took 1.299526958s to acquireMachinesLock for "embed-certs-604000"
	I0708 13:13:45.627747    5511 start.go:93] Provisioning new machine with config: &{Name:embed-certs-604000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:embed-certs-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:13:45.627926    5511 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:13:45.636354    5511 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 13:13:45.684217    5511 start.go:159] libmachine.API.Create for "embed-certs-604000" (driver="qemu2")
	I0708 13:13:45.684251    5511 client.go:168] LocalClient.Create starting
	I0708 13:13:45.684362    5511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:13:45.684418    5511 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:45.684437    5511 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:45.684500    5511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:13:45.684547    5511 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:45.684564    5511 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:45.685158    5511 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:13:45.844688    5511 main.go:141] libmachine: Creating SSH key...
	I0708 13:13:45.926995    5511 main.go:141] libmachine: Creating Disk image...
	I0708 13:13:45.927007    5511 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:13:45.927197    5511 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2
	I0708 13:13:45.937372    5511 main.go:141] libmachine: STDOUT: 
	I0708 13:13:45.937418    5511 main.go:141] libmachine: STDERR: 
	I0708 13:13:45.937467    5511 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2 +20000M
	I0708 13:13:45.946478    5511 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:13:45.946502    5511 main.go:141] libmachine: STDERR: 
	I0708 13:13:45.946512    5511 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2
	I0708 13:13:45.946518    5511 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:13:45.946550    5511 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:66:c8:2c:d0:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2
	I0708 13:13:45.948331    5511 main.go:141] libmachine: STDOUT: 
	I0708 13:13:45.948347    5511 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:45.948377    5511 client.go:171] duration metric: took 264.129042ms to LocalClient.Create
	I0708 13:13:47.950503    5511 start.go:128] duration metric: took 2.32262325s to createHost
	I0708 13:13:47.950582    5511 start.go:83] releasing machines lock for "embed-certs-604000", held for 2.323052667s
	W0708 13:13:47.950668    5511 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:47.957151    5511 out.go:177] * Deleting "embed-certs-604000" in qemu2 ...
	W0708 13:13:47.987376    5511 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:47.987408    5511 start.go:728] Will try again in 5 seconds ...
	I0708 13:13:52.989459    5511 start.go:360] acquireMachinesLock for embed-certs-604000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:52.989845    5511 start.go:364] duration metric: took 313.333µs to acquireMachinesLock for "embed-certs-604000"
	I0708 13:13:52.989971    5511 start.go:93] Provisioning new machine with config: &{Name:embed-certs-604000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:embed-certs-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:13:52.990235    5511 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:13:52.999875    5511 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 13:13:53.049556    5511 start.go:159] libmachine.API.Create for "embed-certs-604000" (driver="qemu2")
	I0708 13:13:53.049627    5511 client.go:168] LocalClient.Create starting
	I0708 13:13:53.049732    5511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:13:53.049795    5511 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:53.049812    5511 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:53.049873    5511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:13:53.049916    5511 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:53.049934    5511 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:53.050532    5511 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:13:53.205239    5511 main.go:141] libmachine: Creating SSH key...
	I0708 13:13:53.267111    5511 main.go:141] libmachine: Creating Disk image...
	I0708 13:13:53.267116    5511 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:13:53.267319    5511 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2
	I0708 13:13:53.276553    5511 main.go:141] libmachine: STDOUT: 
	I0708 13:13:53.276574    5511 main.go:141] libmachine: STDERR: 
	I0708 13:13:53.276620    5511 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2 +20000M
	I0708 13:13:53.284439    5511 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:13:53.284454    5511 main.go:141] libmachine: STDERR: 
	I0708 13:13:53.284464    5511 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2
	I0708 13:13:53.284468    5511 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:13:53.284504    5511 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:0e:be:17:9b:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2
	I0708 13:13:53.286120    5511 main.go:141] libmachine: STDOUT: 
	I0708 13:13:53.286137    5511 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:53.286149    5511 client.go:171] duration metric: took 236.52375ms to LocalClient.Create
	I0708 13:13:55.288323    5511 start.go:128] duration metric: took 2.298061084s to createHost
	I0708 13:13:55.288423    5511 start.go:83] releasing machines lock for "embed-certs-604000", held for 2.298623959s
	W0708 13:13:55.288773    5511 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-604000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-604000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:55.300332    5511 out.go:177] 
	W0708 13:13:55.313424    5511 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:13:55.313469    5511 out.go:239] * 
	* 
	W0708 13:13:55.316380    5511 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:13:55.327355    5511 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-604000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000: exit status 7 (63.502417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.21s)
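Every failure in this group traces back to the same line in the logs above: Failed to connect to "/var/run/socket_vmnet": Connection refused. That message appears to come from socket_vmnet_client on the test host rather than from minikube or QEMU, so the guest VM is never created and every later step sees state=Stopped. A minimal probe, sketched below in Go (a hypothetical helper, not part of the test suite; the socket path is taken from the "executing:" lines above), can confirm whether anything is listening on that unix socket:

	// probe_socket_vmnet.go - hypothetical diagnostic, assumed to run on the macOS test host.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Same unix socket the qemu2 driver hands to socket_vmnet_client in the logs above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// This is the condition the failing tests keep hitting ("Connection refused").
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe reports "connection refused" as well, the socket_vmnet daemon is most likely not running on the host (it is normally started separately, e.g. via launchd), and the remaining failures in this report are downstream of that single condition.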

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-172000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-172000 create -f testdata/busybox.yaml: exit status 1 (37.698542ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-172000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-172000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000: exit status 7 (33.486208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-172000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000: exit status 7 (33.034916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-172000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-172000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-172000 describe deploy/metrics-server -n kube-system: exit status 1 (27.572125ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-172000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-172000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000: exit status 7 (28.421834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (6.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-172000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-172000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (6.263988583s)

                                                
                                                
-- stdout --
	* [no-preload-172000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-172000" primary control-plane node in "no-preload-172000" cluster
	* Restarting existing qemu2 VM for "no-preload-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:13:49.131995    5555 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:13:49.132138    5555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:49.132142    5555 out.go:304] Setting ErrFile to fd 2...
	I0708 13:13:49.132148    5555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:49.132294    5555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:13:49.133302    5555 out.go:298] Setting JSON to false
	I0708 13:13:49.149428    5555 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4397,"bootTime":1720465232,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:13:49.149501    5555 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:13:49.154554    5555 out.go:177] * [no-preload-172000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:13:49.161476    5555 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:13:49.161528    5555 notify.go:220] Checking for updates...
	I0708 13:13:49.168490    5555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:13:49.171483    5555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:13:49.174528    5555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:13:49.177437    5555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:13:49.180495    5555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:13:49.183846    5555 config.go:182] Loaded profile config "no-preload-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:13:49.184106    5555 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:13:49.187452    5555 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 13:13:49.194496    5555 start.go:297] selected driver: qemu2
	I0708 13:13:49.194502    5555 start.go:901] validating driver "qemu2" against &{Name:no-preload-172000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:no-preload-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:13:49.194563    5555 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:13:49.196997    5555 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:13:49.197034    5555 cni.go:84] Creating CNI manager for ""
	I0708 13:13:49.197042    5555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:13:49.197063    5555 start.go:340] cluster config:
	{Name:no-preload-172000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-172000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:13:49.200675    5555 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:49.208525    5555 out.go:177] * Starting "no-preload-172000" primary control-plane node in "no-preload-172000" cluster
	I0708 13:13:49.212491    5555 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:13:49.212561    5555 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/no-preload-172000/config.json ...
	I0708 13:13:49.212573    5555 cache.go:107] acquiring lock: {Name:mk48eaa7950e96669e6f1d9da14b3b30130cdc0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:49.212577    5555 cache.go:107] acquiring lock: {Name:mkae5c5e921428fcb2a41e76cffeb3f2ca669259 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:49.212607    5555 cache.go:107] acquiring lock: {Name:mk2738b766c65d9ab473cf07ed12b9b8c9309344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:49.212629    5555 cache.go:115] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0708 13:13:49.212634    5555 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 63.917µs
	I0708 13:13:49.212638    5555 cache.go:115] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 exists
	I0708 13:13:49.212641    5555 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0708 13:13:49.212643    5555 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.2" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2" took 74.75µs
	I0708 13:13:49.212647    5555 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.2 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 succeeded
	I0708 13:13:49.212648    5555 cache.go:107] acquiring lock: {Name:mkdd58986f390eceea6b2b3f4f0dba958421ad62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:49.212660    5555 cache.go:107] acquiring lock: {Name:mkcf30ac866c6fd660f58a47b540f2be21e4e364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:49.212667    5555 cache.go:115] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 exists
	I0708 13:13:49.212674    5555 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.2" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2" took 90.875µs
	I0708 13:13:49.212682    5555 cache.go:115] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 exists
	I0708 13:13:49.212686    5555 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.2" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2" took 39.333µs
	I0708 13:13:49.212690    5555 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.2 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 succeeded
	I0708 13:13:49.212682    5555 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.2 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 succeeded
	I0708 13:13:49.212698    5555 cache.go:115] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0708 13:13:49.212688    5555 cache.go:107] acquiring lock: {Name:mk6deaa588e28385ded8f13fc509d78c0ad604d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:49.212703    5555 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 44.041µs
	I0708 13:13:49.212707    5555 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0708 13:13:49.212718    5555 cache.go:107] acquiring lock: {Name:mkdafd497465fc2943c37f836ad83e8d95caffbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:49.212717    5555 cache.go:107] acquiring lock: {Name:mk6cfcb5ec6cbe6a9123165e565261bcd41f23f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:49.212762    5555 cache.go:115] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0708 13:13:49.212767    5555 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 79.75µs
	I0708 13:13:49.212771    5555 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0708 13:13:49.212775    5555 cache.go:115] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0708 13:13:49.212778    5555 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 76.584µs
	I0708 13:13:49.212777    5555 cache.go:115] /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 exists
	I0708 13:13:49.212784    5555 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0708 13:13:49.212786    5555 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.2" -> "/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2" took 86.125µs
	I0708 13:13:49.212790    5555 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.2 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 succeeded
	I0708 13:13:49.212796    5555 cache.go:87] Successfully saved all images to host disk.
	I0708 13:13:49.212945    5555 start.go:360] acquireMachinesLock for no-preload-172000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:49.212979    5555 start.go:364] duration metric: took 28.416µs to acquireMachinesLock for "no-preload-172000"
	I0708 13:13:49.212987    5555 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:13:49.212993    5555 fix.go:54] fixHost starting: 
	I0708 13:13:49.213122    5555 fix.go:112] recreateIfNeeded on no-preload-172000: state=Stopped err=<nil>
	W0708 13:13:49.213133    5555 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:13:49.221560    5555 out.go:177] * Restarting existing qemu2 VM for "no-preload-172000" ...
	I0708 13:13:49.225516    5555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:e6:19:28:63:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2
	I0708 13:13:49.227476    5555 main.go:141] libmachine: STDOUT: 
	I0708 13:13:49.227503    5555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:49.227527    5555 fix.go:56] duration metric: took 14.53425ms for fixHost
	I0708 13:13:49.227531    5555 start.go:83] releasing machines lock for "no-preload-172000", held for 14.548584ms
	W0708 13:13:49.227538    5555 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:13:49.227571    5555 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:49.227576    5555 start.go:728] Will try again in 5 seconds ...
	I0708 13:13:54.229587    5555 start.go:360] acquireMachinesLock for no-preload-172000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:55.288660    5555 start.go:364] duration metric: took 1.058988125s to acquireMachinesLock for "no-preload-172000"
	I0708 13:13:55.288829    5555 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:13:55.288850    5555 fix.go:54] fixHost starting: 
	I0708 13:13:55.289588    5555 fix.go:112] recreateIfNeeded on no-preload-172000: state=Stopped err=<nil>
	W0708 13:13:55.289615    5555 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:13:55.309404    5555 out.go:177] * Restarting existing qemu2 VM for "no-preload-172000" ...
	I0708 13:13:55.316489    5555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:e6:19:28:63:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/no-preload-172000/disk.qcow2
	I0708 13:13:55.325632    5555 main.go:141] libmachine: STDOUT: 
	I0708 13:13:55.325694    5555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:55.325768    5555 fix.go:56] duration metric: took 36.918292ms for fixHost
	I0708 13:13:55.325785    5555 start.go:83] releasing machines lock for "no-preload-172000", held for 37.091125ms
	W0708 13:13:55.325973    5555 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-172000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-172000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:55.338332    5555 out.go:177] 
	W0708 13:13:55.342437    5555 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:13:55.342478    5555 out.go:239] * 
	* 
	W0708 13:13:55.345040    5555 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:13:55.360576    5555 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-172000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000: exit status 7 (53.207709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-604000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-604000 create -f testdata/busybox.yaml: exit status 1 (31.06675ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-604000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-604000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000: exit status 7 (30.10675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-604000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000: exit status 7 (33.77975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-172000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000: exit status 7 (33.522792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-172000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-172000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-172000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.623958ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-172000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-172000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000: exit status 7 (30.517459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-604000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-604000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-604000 describe deploy/metrics-server -n kube-system: exit status 1 (28.10975ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-604000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-604000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000: exit status 7 (31.729584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-172000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000: exit status 7 (30.784875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
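The "(-want +got)" block above is a structural diff between the expected v1.30.2 image list and the output of "minikube image list --format=json"; every entry carries a "-" prefix because the got side is empty. A rough sketch of how such a diff is produced (assuming github.com/google/go-cmp, whose cmp.Diff(want, got) output uses this -want/+got layout; the names below are illustrative, not the test's actual code):

	// image_diff_sketch.go - hypothetical illustration of the comparison shown above.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.30.2",
			// ... the remaining expected v1.30.2 images listed in the diff above
		}
		var got []string // empty: the VM never started, so "image list" had nothing to report
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.30.2 images missing (-want +got):\n%s", diff)
		}
	}

Because the host never came up, the got slice is empty and the entire want list is reported as missing; the failure is a consequence of the socket_vmnet condition noted earlier, not of the image cache itself.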

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-172000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-172000 --alsologtostderr -v=1: exit status 83 (41.837625ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-172000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-172000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:13:55.628309    5589 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:13:55.628457    5589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:55.628461    5589 out.go:304] Setting ErrFile to fd 2...
	I0708 13:13:55.628463    5589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:55.628580    5589 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:13:55.628803    5589 out.go:298] Setting JSON to false
	I0708 13:13:55.628811    5589 mustload.go:65] Loading cluster: no-preload-172000
	I0708 13:13:55.629009    5589 config.go:182] Loaded profile config "no-preload-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:13:55.630615    5589 out.go:177] * The control-plane node no-preload-172000 host is not running: state=Stopped
	I0708 13:13:55.634698    5589 out.go:177]   To start a cluster, run: "minikube start -p no-preload-172000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-172000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000: exit status 7 (29.000625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-172000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000: exit status 7 (27.836292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-172000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-601000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-601000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (10.088470375s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-601000" primary control-plane node in "default-k8s-diff-port-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:13:56.034840    5619 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:13:56.034966    5619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:56.034972    5619 out.go:304] Setting ErrFile to fd 2...
	I0708 13:13:56.034976    5619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:56.035124    5619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:13:56.036327    5619 out.go:298] Setting JSON to false
	I0708 13:13:56.052710    5619 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4404,"bootTime":1720465232,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:13:56.052774    5619 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:13:56.056727    5619 out.go:177] * [default-k8s-diff-port-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:13:56.063789    5619 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:13:56.063868    5619 notify.go:220] Checking for updates...
	I0708 13:13:56.071774    5619 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:13:56.074746    5619 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:13:56.077736    5619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:13:56.080786    5619 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:13:56.083786    5619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:13:56.087114    5619 config.go:182] Loaded profile config "embed-certs-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:13:56.087184    5619 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:13:56.087231    5619 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:13:56.091714    5619 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:13:56.098681    5619 start.go:297] selected driver: qemu2
	I0708 13:13:56.098694    5619 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:13:56.098699    5619 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:13:56.101030    5619 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 13:13:56.103743    5619 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:13:56.105192    5619 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:13:56.105214    5619 cni.go:84] Creating CNI manager for ""
	I0708 13:13:56.105230    5619 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:13:56.105234    5619 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 13:13:56.105271    5619 start.go:340] cluster config:
	{Name:default-k8s-diff-port-601000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:13:56.108935    5619 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:56.115740    5619 out.go:177] * Starting "default-k8s-diff-port-601000" primary control-plane node in "default-k8s-diff-port-601000" cluster
	I0708 13:13:56.119689    5619 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:13:56.119705    5619 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:13:56.119712    5619 cache.go:56] Caching tarball of preloaded images
	I0708 13:13:56.119773    5619 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:13:56.119779    5619 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:13:56.119834    5619 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/default-k8s-diff-port-601000/config.json ...
	I0708 13:13:56.119845    5619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/default-k8s-diff-port-601000/config.json: {Name:mk6945ea7ef9e26108eae7215a5ddd35d6c57621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:13:56.120179    5619 start.go:360] acquireMachinesLock for default-k8s-diff-port-601000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:56.120215    5619 start.go:364] duration metric: took 28.958µs to acquireMachinesLock for "default-k8s-diff-port-601000"
	I0708 13:13:56.120227    5619 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:13:56.120253    5619 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:13:56.128655    5619 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 13:13:56.146320    5619 start.go:159] libmachine.API.Create for "default-k8s-diff-port-601000" (driver="qemu2")
	I0708 13:13:56.146354    5619 client.go:168] LocalClient.Create starting
	I0708 13:13:56.146417    5619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:13:56.146449    5619 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:56.146458    5619 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:56.146498    5619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:13:56.146522    5619 main.go:141] libmachine: Decoding PEM data...
	I0708 13:13:56.146528    5619 main.go:141] libmachine: Parsing certificate...
	I0708 13:13:56.146883    5619 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:13:56.292323    5619 main.go:141] libmachine: Creating SSH key...
	I0708 13:13:56.484905    5619 main.go:141] libmachine: Creating Disk image...
	I0708 13:13:56.484911    5619 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:13:56.485099    5619 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2
	I0708 13:13:56.494923    5619 main.go:141] libmachine: STDOUT: 
	I0708 13:13:56.494945    5619 main.go:141] libmachine: STDERR: 
	I0708 13:13:56.495002    5619 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2 +20000M
	I0708 13:13:56.503057    5619 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:13:56.503070    5619 main.go:141] libmachine: STDERR: 
	I0708 13:13:56.503083    5619 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2
	I0708 13:13:56.503088    5619 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:13:56.503120    5619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:8b:5a:f0:af:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2
	I0708 13:13:56.504785    5619 main.go:141] libmachine: STDOUT: 
	I0708 13:13:56.504800    5619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:56.504819    5619 client.go:171] duration metric: took 358.471958ms to LocalClient.Create
	I0708 13:13:58.506989    5619 start.go:128] duration metric: took 2.386789083s to createHost
	I0708 13:13:58.507054    5619 start.go:83] releasing machines lock for "default-k8s-diff-port-601000", held for 2.386907917s
	W0708 13:13:58.507142    5619 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:58.518491    5619 out.go:177] * Deleting "default-k8s-diff-port-601000" in qemu2 ...
	W0708 13:13:58.553549    5619 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:58.553574    5619 start.go:728] Will try again in 5 seconds ...
	I0708 13:14:03.555646    5619 start.go:360] acquireMachinesLock for default-k8s-diff-port-601000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:14:03.556121    5619 start.go:364] duration metric: took 357.5µs to acquireMachinesLock for "default-k8s-diff-port-601000"
	I0708 13:14:03.556260    5619 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:14:03.556516    5619 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:14:03.561995    5619 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 13:14:03.615336    5619 start.go:159] libmachine.API.Create for "default-k8s-diff-port-601000" (driver="qemu2")
	I0708 13:14:03.615393    5619 client.go:168] LocalClient.Create starting
	I0708 13:14:03.615491    5619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:14:03.615557    5619 main.go:141] libmachine: Decoding PEM data...
	I0708 13:14:03.615571    5619 main.go:141] libmachine: Parsing certificate...
	I0708 13:14:03.615631    5619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:14:03.615672    5619 main.go:141] libmachine: Decoding PEM data...
	I0708 13:14:03.615683    5619 main.go:141] libmachine: Parsing certificate...
	I0708 13:14:03.616257    5619 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:14:03.773134    5619 main.go:141] libmachine: Creating SSH key...
	I0708 13:14:04.020992    5619 main.go:141] libmachine: Creating Disk image...
	I0708 13:14:04.021002    5619 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:14:04.021178    5619 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2
	I0708 13:14:04.030726    5619 main.go:141] libmachine: STDOUT: 
	I0708 13:14:04.030754    5619 main.go:141] libmachine: STDERR: 
	I0708 13:14:04.030819    5619 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2 +20000M
	I0708 13:14:04.038836    5619 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:14:04.038849    5619 main.go:141] libmachine: STDERR: 
	I0708 13:14:04.038869    5619 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2
	I0708 13:14:04.038876    5619 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:14:04.038915    5619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:15:34:c4:cb:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2
	I0708 13:14:04.040528    5619 main.go:141] libmachine: STDOUT: 
	I0708 13:14:04.040541    5619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:14:04.040558    5619 client.go:171] duration metric: took 425.172958ms to LocalClient.Create
	I0708 13:14:06.040873    5619 start.go:128] duration metric: took 2.484378584s to createHost
	I0708 13:14:06.040966    5619 start.go:83] releasing machines lock for "default-k8s-diff-port-601000", held for 2.484902291s
	W0708 13:14:06.041300    5619 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:14:06.050008    5619 out.go:177] 
	W0708 13:14:06.063120    5619 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:14:06.063164    5619 out.go:239] * 
	* 
	W0708 13:14:06.066089    5619 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:14:06.078960    5619 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-601000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000: exit status 7 (63.539ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (6.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-604000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-604000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (6.515951792s)

                                                
                                                
-- stdout --
	* [embed-certs-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-604000" primary control-plane node in "embed-certs-604000" cluster
	* Restarting existing qemu2 VM for "embed-certs-604000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-604000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:13:59.629399    5648 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:13:59.629529    5648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:59.629533    5648 out.go:304] Setting ErrFile to fd 2...
	I0708 13:13:59.629535    5648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:13:59.629669    5648 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:13:59.630692    5648 out.go:298] Setting JSON to false
	I0708 13:13:59.647026    5648 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4407,"bootTime":1720465232,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:13:59.647105    5648 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:13:59.651313    5648 out.go:177] * [embed-certs-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:13:59.658306    5648 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:13:59.658379    5648 notify.go:220] Checking for updates...
	I0708 13:13:59.666273    5648 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:13:59.669346    5648 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:13:59.672306    5648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:13:59.675283    5648 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:13:59.678312    5648 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:13:59.681467    5648 config.go:182] Loaded profile config "embed-certs-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:13:59.681769    5648 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:13:59.686246    5648 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 13:13:59.693295    5648 start.go:297] selected driver: qemu2
	I0708 13:13:59.693301    5648 start.go:901] validating driver "qemu2" against &{Name:embed-certs-604000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:embed-certs-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:13:59.693374    5648 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:13:59.695688    5648 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:13:59.695713    5648 cni.go:84] Creating CNI manager for ""
	I0708 13:13:59.695722    5648 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:13:59.695757    5648 start.go:340] cluster config:
	{Name:embed-certs-604000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-604000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:13:59.699480    5648 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:13:59.707299    5648 out.go:177] * Starting "embed-certs-604000" primary control-plane node in "embed-certs-604000" cluster
	I0708 13:13:59.711236    5648 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:13:59.711249    5648 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:13:59.711257    5648 cache.go:56] Caching tarball of preloaded images
	I0708 13:13:59.711313    5648 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:13:59.711318    5648 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:13:59.711367    5648 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/embed-certs-604000/config.json ...
	I0708 13:13:59.711822    5648 start.go:360] acquireMachinesLock for embed-certs-604000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:13:59.711856    5648 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "embed-certs-604000"
	I0708 13:13:59.711865    5648 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:13:59.711871    5648 fix.go:54] fixHost starting: 
	I0708 13:13:59.711993    5648 fix.go:112] recreateIfNeeded on embed-certs-604000: state=Stopped err=<nil>
	W0708 13:13:59.712002    5648 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:13:59.715338    5648 out.go:177] * Restarting existing qemu2 VM for "embed-certs-604000" ...
	I0708 13:13:59.723378    5648 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:0e:be:17:9b:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2
	I0708 13:13:59.725675    5648 main.go:141] libmachine: STDOUT: 
	I0708 13:13:59.725698    5648 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:13:59.725728    5648 fix.go:56] duration metric: took 13.858541ms for fixHost
	I0708 13:13:59.725733    5648 start.go:83] releasing machines lock for "embed-certs-604000", held for 13.872417ms
	W0708 13:13:59.725740    5648 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:13:59.725779    5648 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:13:59.725784    5648 start.go:728] Will try again in 5 seconds ...
	I0708 13:14:04.727848    5648 start.go:360] acquireMachinesLock for embed-certs-604000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:14:06.041167    5648 start.go:364] duration metric: took 1.313179583s to acquireMachinesLock for "embed-certs-604000"
	I0708 13:14:06.041357    5648 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:14:06.041373    5648 fix.go:54] fixHost starting: 
	I0708 13:14:06.042206    5648 fix.go:112] recreateIfNeeded on embed-certs-604000: state=Stopped err=<nil>
	W0708 13:14:06.042236    5648 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:14:06.058950    5648 out.go:177] * Restarting existing qemu2 VM for "embed-certs-604000" ...
	I0708 13:14:06.067019    5648 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:0e:be:17:9b:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/embed-certs-604000/disk.qcow2
	I0708 13:14:06.076238    5648 main.go:141] libmachine: STDOUT: 
	I0708 13:14:06.076305    5648 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:14:06.076391    5648 fix.go:56] duration metric: took 35.020041ms for fixHost
	I0708 13:14:06.076449    5648 start.go:83] releasing machines lock for "embed-certs-604000", held for 35.180709ms
	W0708 13:14:06.076672    5648 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-604000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-604000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:14:06.090942    5648 out.go:177] 
	W0708 13:14:06.093974    5648 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:14:06.094006    5648 out.go:239] * 
	* 
	W0708 13:14:06.096645    5648 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:14:06.104935    5648 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-604000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000: exit status 7 (50.013041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.57s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-601000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-601000 create -f testdata/busybox.yaml: exit status 1 (31.399625ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-601000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-601000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000: exit status 7 (29.624542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-601000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000: exit status 7 (33.158834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-604000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000: exit status 7 (33.237666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-604000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-604000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-604000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.013166ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-604000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-604000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000: exit status 7 (30.21875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-601000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-601000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-601000 describe deploy/metrics-server -n kube-system: exit status 1 (28.48525ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-601000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-601000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000: exit status 7 (33.7175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-604000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000: exit status 7 (29.663083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-604000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-604000 --alsologtostderr -v=1: exit status 83 (47.844666ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-604000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-604000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:14:06.370442    5683 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:14:06.370585    5683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:14:06.370589    5683 out.go:304] Setting ErrFile to fd 2...
	I0708 13:14:06.370591    5683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:14:06.370735    5683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:14:06.370948    5683 out.go:298] Setting JSON to false
	I0708 13:14:06.370955    5683 mustload.go:65] Loading cluster: embed-certs-604000
	I0708 13:14:06.371134    5683 config.go:182] Loaded profile config "embed-certs-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:14:06.375958    5683 out.go:177] * The control-plane node embed-certs-604000 host is not running: state=Stopped
	I0708 13:14:06.381899    5683 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-604000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-604000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000: exit status 7 (29.723833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-604000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000: exit status 7 (27.783208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (10.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-812000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-812000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (10.012151625s)

                                                
                                                
-- stdout --
	* [newest-cni-812000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-812000" primary control-plane node in "newest-cni-812000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-812000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:14:06.674837    5706 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:14:06.674958    5706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:14:06.674961    5706 out.go:304] Setting ErrFile to fd 2...
	I0708 13:14:06.674964    5706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:14:06.675090    5706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:14:06.676059    5706 out.go:298] Setting JSON to false
	I0708 13:14:06.692496    5706 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4414,"bootTime":1720465232,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:14:06.692569    5706 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:14:06.696957    5706 out.go:177] * [newest-cni-812000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:14:06.703901    5706 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:14:06.703948    5706 notify.go:220] Checking for updates...
	I0708 13:14:06.711880    5706 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:14:06.715001    5706 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:14:06.717887    5706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:14:06.720959    5706 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:14:06.723894    5706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:14:06.727263    5706 config.go:182] Loaded profile config "default-k8s-diff-port-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:14:06.727323    5706 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:14:06.727365    5706 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:14:06.731922    5706 out.go:177] * Using the qemu2 driver based on user configuration
	I0708 13:14:06.738933    5706 start.go:297] selected driver: qemu2
	I0708 13:14:06.738939    5706 start.go:901] validating driver "qemu2" against <nil>
	I0708 13:14:06.738948    5706 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:14:06.741176    5706 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0708 13:14:06.741197    5706 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0708 13:14:06.748865    5706 out.go:177] * Automatically selected the socket_vmnet network
	I0708 13:14:06.752014    5706 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0708 13:14:06.752039    5706 cni.go:84] Creating CNI manager for ""
	I0708 13:14:06.752054    5706 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:14:06.752063    5706 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 13:14:06.752093    5706 start.go:340] cluster config:
	{Name:newest-cni-812000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:14:06.755817    5706 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:14:06.761916    5706 out.go:177] * Starting "newest-cni-812000" primary control-plane node in "newest-cni-812000" cluster
	I0708 13:14:06.765932    5706 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:14:06.765948    5706 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:14:06.765957    5706 cache.go:56] Caching tarball of preloaded images
	I0708 13:14:06.766016    5706 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:14:06.766022    5706 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:14:06.766129    5706 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/newest-cni-812000/config.json ...
	I0708 13:14:06.766140    5706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/newest-cni-812000/config.json: {Name:mk78a67fb4d6740d6b7d69718b5d13d114661e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 13:14:06.766485    5706 start.go:360] acquireMachinesLock for newest-cni-812000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:14:06.766520    5706 start.go:364] duration metric: took 29.167µs to acquireMachinesLock for "newest-cni-812000"
	I0708 13:14:06.766531    5706 start.go:93] Provisioning new machine with config: &{Name:newest-cni-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:newest-cni-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:14:06.766558    5706 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:14:06.774898    5706 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 13:14:06.793190    5706 start.go:159] libmachine.API.Create for "newest-cni-812000" (driver="qemu2")
	I0708 13:14:06.793219    5706 client.go:168] LocalClient.Create starting
	I0708 13:14:06.793293    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:14:06.793324    5706 main.go:141] libmachine: Decoding PEM data...
	I0708 13:14:06.793337    5706 main.go:141] libmachine: Parsing certificate...
	I0708 13:14:06.793375    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:14:06.793399    5706 main.go:141] libmachine: Decoding PEM data...
	I0708 13:14:06.793407    5706 main.go:141] libmachine: Parsing certificate...
	I0708 13:14:06.793835    5706 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:14:06.974275    5706 main.go:141] libmachine: Creating SSH key...
	I0708 13:14:07.093134    5706 main.go:141] libmachine: Creating Disk image...
	I0708 13:14:07.093143    5706 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:14:07.093317    5706 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2
	I0708 13:14:07.102836    5706 main.go:141] libmachine: STDOUT: 
	I0708 13:14:07.102856    5706 main.go:141] libmachine: STDERR: 
	I0708 13:14:07.102905    5706 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2 +20000M
	I0708 13:14:07.110727    5706 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:14:07.110741    5706 main.go:141] libmachine: STDERR: 
	I0708 13:14:07.110753    5706 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2
	I0708 13:14:07.110756    5706 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:14:07.110791    5706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:62:53:4a:98:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2
	I0708 13:14:07.112473    5706 main.go:141] libmachine: STDOUT: 
	I0708 13:14:07.112486    5706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:14:07.112513    5706 client.go:171] duration metric: took 319.287833ms to LocalClient.Create
	I0708 13:14:09.114634    5706 start.go:128] duration metric: took 2.3481335s to createHost
	I0708 13:14:09.114688    5706 start.go:83] releasing machines lock for "newest-cni-812000", held for 2.348235458s
	W0708 13:14:09.114752    5706 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:14:09.122108    5706 out.go:177] * Deleting "newest-cni-812000" in qemu2 ...
	W0708 13:14:09.150072    5706 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:14:09.150107    5706 start.go:728] Will try again in 5 seconds ...
	I0708 13:14:14.152259    5706 start.go:360] acquireMachinesLock for newest-cni-812000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:14:14.152796    5706 start.go:364] duration metric: took 393.334µs to acquireMachinesLock for "newest-cni-812000"
	I0708 13:14:14.152976    5706 start.go:93] Provisioning new machine with config: &{Name:newest-cni-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:newest-cni-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0708 13:14:14.153207    5706 start.go:125] createHost starting for "" (driver="qemu2")
	I0708 13:14:14.162801    5706 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0708 13:14:14.216613    5706 start.go:159] libmachine.API.Create for "newest-cni-812000" (driver="qemu2")
	I0708 13:14:14.216667    5706 client.go:168] LocalClient.Create starting
	I0708 13:14:14.216784    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/ca.pem
	I0708 13:14:14.216841    5706 main.go:141] libmachine: Decoding PEM data...
	I0708 13:14:14.216858    5706 main.go:141] libmachine: Parsing certificate...
	I0708 13:14:14.216928    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19195-1270/.minikube/certs/cert.pem
	I0708 13:14:14.216973    5706 main.go:141] libmachine: Decoding PEM data...
	I0708 13:14:14.216989    5706 main.go:141] libmachine: Parsing certificate...
	I0708 13:14:14.217589    5706 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso...
	I0708 13:14:14.373513    5706 main.go:141] libmachine: Creating SSH key...
	I0708 13:14:14.572921    5706 main.go:141] libmachine: Creating Disk image...
	I0708 13:14:14.572927    5706 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0708 13:14:14.573148    5706 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2.raw /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2
	I0708 13:14:14.582873    5706 main.go:141] libmachine: STDOUT: 
	I0708 13:14:14.582892    5706 main.go:141] libmachine: STDERR: 
	I0708 13:14:14.582932    5706 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2 +20000M
	I0708 13:14:14.590897    5706 main.go:141] libmachine: STDOUT: Image resized.
	
	I0708 13:14:14.590921    5706 main.go:141] libmachine: STDERR: 
	I0708 13:14:14.590933    5706 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2
	I0708 13:14:14.590939    5706 main.go:141] libmachine: Starting QEMU VM...
	I0708 13:14:14.590980    5706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:c6:e5:6d:05:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2
	I0708 13:14:14.592671    5706 main.go:141] libmachine: STDOUT: 
	I0708 13:14:14.592695    5706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:14:14.592708    5706 client.go:171] duration metric: took 376.047375ms to LocalClient.Create
	I0708 13:14:16.594811    5706 start.go:128] duration metric: took 2.44165675s to createHost
	I0708 13:14:16.594872    5706 start.go:83] releasing machines lock for "newest-cni-812000", held for 2.442126166s
	W0708 13:14:16.595255    5706 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-812000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-812000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:14:16.611876    5706 out.go:177] 
	W0708 13:14:16.617968    5706 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:14:16.617994    5706 out.go:239] * 
	* 
	W0708 13:14:16.620804    5706 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:14:16.633919    5706 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-812000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-812000 -n newest-cni-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-812000 -n newest-cni-812000: exit status 7 (70.614458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.09s)
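Every start attempt above dies at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the /var/run/socket_vmnet unix socket ("Connection refused"), which indicates the socket_vmnet daemon is not running on this Jenkins host. A minimal Go sketch of that precondition check (a hypothetical helper for illustration only, not code from minikube or the test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// checkSocketVMNet dials the unix socket that the qemu2 driver's
	// socket_vmnet network depends on; a "connection refused" here is the
	// same failure mode reported throughout the logs above.
	func checkSocketVMNet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		// Path matches SocketVMnetPath in the cluster config logged above.
		if err := checkSocketVMNet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is accepting connections")
	}

On this agent the dial would fail exactly as socket_vmnet_client does, which is consistent with every qemu2-driver test in this run failing within a few seconds of VM start.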

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-601000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-601000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (7.03863875s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-601000" primary control-plane node in "default-k8s-diff-port-601000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-601000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-601000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:14:09.660995    5734 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:14:09.661123    5734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:14:09.661127    5734 out.go:304] Setting ErrFile to fd 2...
	I0708 13:14:09.661130    5734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:14:09.661270    5734 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:14:09.662275    5734 out.go:298] Setting JSON to false
	I0708 13:14:09.678089    5734 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4417,"bootTime":1720465232,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:14:09.678157    5734 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:14:09.682542    5734 out.go:177] * [default-k8s-diff-port-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:14:09.690477    5734 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:14:09.690537    5734 notify.go:220] Checking for updates...
	I0708 13:14:09.697556    5734 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:14:09.700428    5734 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:14:09.703445    5734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:14:09.706350    5734 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:14:09.709409    5734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:14:09.712777    5734 config.go:182] Loaded profile config "default-k8s-diff-port-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:14:09.713043    5734 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:14:09.717364    5734 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 13:14:09.724452    5734 start.go:297] selected driver: qemu2
	I0708 13:14:09.724459    5734 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:14:09.724532    5734 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:14:09.726721    5734 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 13:14:09.726762    5734 cni.go:84] Creating CNI manager for ""
	I0708 13:14:09.726771    5734 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:14:09.726793    5734 start.go:340] cluster config:
	{Name:default-k8s-diff-port-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-601000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:14:09.730375    5734 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:14:09.739351    5734 out.go:177] * Starting "default-k8s-diff-port-601000" primary control-plane node in "default-k8s-diff-port-601000" cluster
	I0708 13:14:09.743398    5734 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:14:09.743416    5734 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:14:09.743427    5734 cache.go:56] Caching tarball of preloaded images
	I0708 13:14:09.743506    5734 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:14:09.743517    5734 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:14:09.743577    5734 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/default-k8s-diff-port-601000/config.json ...
	I0708 13:14:09.744035    5734 start.go:360] acquireMachinesLock for default-k8s-diff-port-601000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:14:09.744071    5734 start.go:364] duration metric: took 29.208µs to acquireMachinesLock for "default-k8s-diff-port-601000"
	I0708 13:14:09.744080    5734 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:14:09.744088    5734 fix.go:54] fixHost starting: 
	I0708 13:14:09.744212    5734 fix.go:112] recreateIfNeeded on default-k8s-diff-port-601000: state=Stopped err=<nil>
	W0708 13:14:09.744221    5734 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:14:09.748432    5734 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-601000" ...
	I0708 13:14:09.756465    5734 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:15:34:c4:cb:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2
	I0708 13:14:09.758532    5734 main.go:141] libmachine: STDOUT: 
	I0708 13:14:09.758553    5734 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:14:09.758582    5734 fix.go:56] duration metric: took 14.494917ms for fixHost
	I0708 13:14:09.758587    5734 start.go:83] releasing machines lock for "default-k8s-diff-port-601000", held for 14.512584ms
	W0708 13:14:09.758593    5734 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:14:09.758625    5734 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:14:09.758630    5734 start.go:728] Will try again in 5 seconds ...
	I0708 13:14:14.760659    5734 start.go:360] acquireMachinesLock for default-k8s-diff-port-601000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:14:16.595060    5734 start.go:364] duration metric: took 1.834343875s to acquireMachinesLock for "default-k8s-diff-port-601000"
	I0708 13:14:16.595265    5734 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:14:16.595286    5734 fix.go:54] fixHost starting: 
	I0708 13:14:16.596027    5734 fix.go:112] recreateIfNeeded on default-k8s-diff-port-601000: state=Stopped err=<nil>
	W0708 13:14:16.596057    5734 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:14:16.614844    5734 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-601000" ...
	I0708 13:14:16.622144    5734 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:15:34:c4:cb:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/default-k8s-diff-port-601000/disk.qcow2
	I0708 13:14:16.632192    5734 main.go:141] libmachine: STDOUT: 
	I0708 13:14:16.632274    5734 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:14:16.632385    5734 fix.go:56] duration metric: took 37.1045ms for fixHost
	I0708 13:14:16.632409    5734 start.go:83] releasing machines lock for "default-k8s-diff-port-601000", held for 37.308292ms
	W0708 13:14:16.632637    5734 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-601000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-601000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:14:16.646056    5734 out.go:177] 
	W0708 13:14:16.650042    5734 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:14:16.650072    5734 out.go:239] * 
	* 
	W0708 13:14:16.652094    5734 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:14:16.660818    5734 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-601000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000: exit status 7 (60.8645ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-601000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000: exit status 7 (35.044875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-601000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.020209ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-601000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000: exit status 7 (31.661333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-601000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000: exit status 7 (28.675917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)
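The image check above prints a want/got diff and lists every expected v1.30.2 image as missing, which follows directly from the VM never starting: `image list` has nothing to report. For illustration, a hedged sketch of that kind of set-difference comparison in Go (not the actual test implementation):

	package main

	import "fmt"

	// missingImages returns the entries of want that are absent from got,
	// analogous to the "-want +got" diff shown above.
	func missingImages(want, got []string) []string {
		have := make(map[string]bool, len(got))
		for _, img := range got {
			have[img] = true
		}
		var missing []string
		for _, img := range want {
			if !have[img] {
				missing = append(missing, img)
			}
		}
		return missing
	}

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.30.2",
			"registry.k8s.io/kube-proxy:v1.30.2",
			"registry.k8s.io/pause:3.9",
		}
		var got []string // empty: the host never ran, so no images were listed
		fmt.Println(missingImages(want, got))
	}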

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-601000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-601000 --alsologtostderr -v=1: exit status 83 (43.91625ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-601000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-601000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:14:16.920744    5765 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:14:16.920901    5765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:14:16.920904    5765 out.go:304] Setting ErrFile to fd 2...
	I0708 13:14:16.920907    5765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:14:16.921037    5765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:14:16.921237    5765 out.go:298] Setting JSON to false
	I0708 13:14:16.921246    5765 mustload.go:65] Loading cluster: default-k8s-diff-port-601000
	I0708 13:14:16.921434    5765 config.go:182] Loaded profile config "default-k8s-diff-port-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:14:16.925992    5765 out.go:177] * The control-plane node default-k8s-diff-port-601000 host is not running: state=Stopped
	I0708 13:14:16.932847    5765 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-601000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-601000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000: exit status 7 (28.726459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-601000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000: exit status 7 (29.140334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-812000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-812000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.183881125s)

                                                
                                                
-- stdout --
	* [newest-cni-812000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-812000" primary control-plane node in "newest-cni-812000" cluster
	* Restarting existing qemu2 VM for "newest-cni-812000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-812000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:14:20.370281    5802 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:14:20.370426    5802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:14:20.370429    5802 out.go:304] Setting ErrFile to fd 2...
	I0708 13:14:20.370432    5802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:14:20.370584    5802 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:14:20.371657    5802 out.go:298] Setting JSON to false
	I0708 13:14:20.387660    5802 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4428,"bootTime":1720465232,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 13:14:20.387726    5802 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 13:14:20.391195    5802 out.go:177] * [newest-cni-812000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 13:14:20.399263    5802 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 13:14:20.399318    5802 notify.go:220] Checking for updates...
	I0708 13:14:20.406154    5802 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 13:14:20.409164    5802 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 13:14:20.412146    5802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 13:14:20.415155    5802 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 13:14:20.418173    5802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 13:14:20.421504    5802 config.go:182] Loaded profile config "newest-cni-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:14:20.421770    5802 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 13:14:20.426112    5802 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 13:14:20.433120    5802 start.go:297] selected driver: qemu2
	I0708 13:14:20.433125    5802 start.go:901] validating driver "qemu2" against &{Name:newest-cni-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:newest-cni-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:14:20.433168    5802 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 13:14:20.435377    5802 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0708 13:14:20.435420    5802 cni.go:84] Creating CNI manager for ""
	I0708 13:14:20.435429    5802 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 13:14:20.435452    5802 start.go:340] cluster config:
	{Name:newest-cni-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:newest-cni-812000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 13:14:20.438866    5802 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 13:14:20.446127    5802 out.go:177] * Starting "newest-cni-812000" primary control-plane node in "newest-cni-812000" cluster
	I0708 13:14:20.450233    5802 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 13:14:20.450248    5802 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 13:14:20.450262    5802 cache.go:56] Caching tarball of preloaded images
	I0708 13:14:20.450330    5802 preload.go:173] Found /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0708 13:14:20.450335    5802 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 13:14:20.450412    5802 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/newest-cni-812000/config.json ...
	I0708 13:14:20.450766    5802 start.go:360] acquireMachinesLock for newest-cni-812000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:14:20.450806    5802 start.go:364] duration metric: took 33.833µs to acquireMachinesLock for "newest-cni-812000"
	I0708 13:14:20.450815    5802 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:14:20.450822    5802 fix.go:54] fixHost starting: 
	I0708 13:14:20.450932    5802 fix.go:112] recreateIfNeeded on newest-cni-812000: state=Stopped err=<nil>
	W0708 13:14:20.450940    5802 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:14:20.455158    5802 out.go:177] * Restarting existing qemu2 VM for "newest-cni-812000" ...
	I0708 13:14:20.463259    5802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:c6:e5:6d:05:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2
	I0708 13:14:20.465193    5802 main.go:141] libmachine: STDOUT: 
	I0708 13:14:20.465211    5802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:14:20.465237    5802 fix.go:56] duration metric: took 14.416084ms for fixHost
	I0708 13:14:20.465242    5802 start.go:83] releasing machines lock for "newest-cni-812000", held for 14.432583ms
	W0708 13:14:20.465249    5802 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:14:20.465285    5802 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:14:20.465290    5802 start.go:728] Will try again in 5 seconds ...
	I0708 13:14:25.467313    5802 start.go:360] acquireMachinesLock for newest-cni-812000: {Name:mk1f21792edcf846bc4e08453589dd89c9c23696 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0708 13:14:25.467659    5802 start.go:364] duration metric: took 269.083µs to acquireMachinesLock for "newest-cni-812000"
	I0708 13:14:25.467786    5802 start.go:96] Skipping create...Using existing machine configuration
	I0708 13:14:25.467807    5802 fix.go:54] fixHost starting: 
	I0708 13:14:25.468530    5802 fix.go:112] recreateIfNeeded on newest-cni-812000: state=Stopped err=<nil>
	W0708 13:14:25.468556    5802 fix.go:138] unexpected machine state, will restart: <nil>
	I0708 13:14:25.476981    5802 out.go:177] * Restarting existing qemu2 VM for "newest-cni-812000" ...
	I0708 13:14:25.482118    5802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:c6:e5:6d:05:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/newest-cni-812000/disk.qcow2
	I0708 13:14:25.490924    5802 main.go:141] libmachine: STDOUT: 
	I0708 13:14:25.490978    5802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0708 13:14:25.491038    5802 fix.go:56] duration metric: took 23.23275ms for fixHost
	I0708 13:14:25.491056    5802 start.go:83] releasing machines lock for "newest-cni-812000", held for 23.373083ms
	W0708 13:14:25.491260    5802 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-812000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-812000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0708 13:14:25.498978    5802 out.go:177] 
	W0708 13:14:25.502978    5802 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0708 13:14:25.503003    5802 out.go:239] * 
	* 
	W0708 13:14:25.505587    5802 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 13:14:25.512954    5802 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-812000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-812000 -n newest-cni-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-812000 -n newest-cni-812000: exit status 7 (66.984458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-812000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-812000 -n newest-cni-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-812000 -n newest-cni-812000: exit status 7 (29.454292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-812000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-812000 --alsologtostderr -v=1: exit status 83 (41.107416ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-812000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-812000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 13:14:25.689617    5816 out.go:291] Setting OutFile to fd 1 ...
	I0708 13:14:25.689798    5816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:14:25.689801    5816 out.go:304] Setting ErrFile to fd 2...
	I0708 13:14:25.689803    5816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 13:14:25.689933    5816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 13:14:25.690169    5816 out.go:298] Setting JSON to false
	I0708 13:14:25.690176    5816 mustload.go:65] Loading cluster: newest-cni-812000
	I0708 13:14:25.690370    5816 config.go:182] Loaded profile config "newest-cni-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 13:14:25.694515    5816 out.go:177] * The control-plane node newest-cni-812000 host is not running: state=Stopped
	I0708 13:14:25.698403    5816 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-812000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-812000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-812000 -n newest-cni-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-812000 -n newest-cni-812000: exit status 7 (29.100917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-812000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-812000 -n newest-cni-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-812000 -n newest-cni-812000: exit status 7 (29.923833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-812000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (156/279)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.2/json-events 6.54
13 TestDownloadOnly/v1.30.2/preload-exists 0
16 TestDownloadOnly/v1.30.2/kubectl 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.08
18 TestDownloadOnly/v1.30.2/DeleteAll 0.11
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.35
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 132.27
29 TestAddons/parallel/Registry 31.44
30 TestAddons/parallel/Ingress 18.21
31 TestAddons/parallel/InspektorGadget 10.21
32 TestAddons/parallel/MetricsServer 5.26
35 TestAddons/parallel/CSI 54.99
36 TestAddons/parallel/Headlamp 17.37
37 TestAddons/parallel/CloudSpanner 5.17
38 TestAddons/parallel/LocalPath 40.79
39 TestAddons/parallel/NvidiaDevicePlugin 5.16
40 TestAddons/parallel/Yakd 5
41 TestAddons/parallel/Volcano 37.9
44 TestAddons/serial/GCPAuth/Namespaces 0.07
45 TestAddons/StoppedEnableDisable 12.4
53 TestHyperKitDriverInstallOrUpdate 10.22
56 TestErrorSpam/setup 35.89
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.64
60 TestErrorSpam/unpause 0.6
61 TestErrorSpam/stop 64.27
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 51.25
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 37.49
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.56
73 TestFunctional/serial/CacheCmd/cache/add_local 1.12
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.64
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.55
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.91
81 TestFunctional/serial/ExtraConfig 36.26
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.66
84 TestFunctional/serial/LogsFileCmd 0.62
85 TestFunctional/serial/InvalidService 4.17
87 TestFunctional/parallel/ConfigCmd 0.22
88 TestFunctional/parallel/DashboardCmd 10.31
89 TestFunctional/parallel/DryRun 0.22
90 TestFunctional/parallel/InternationalLanguage 0.11
91 TestFunctional/parallel/StatusCmd 0.24
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 25.81
99 TestFunctional/parallel/SSHCmd 0.13
100 TestFunctional/parallel/CpCmd 0.44
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.47
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
111 TestFunctional/parallel/License 0.26
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.08
124 TestFunctional/parallel/ServiceCmd/List 0.28
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
127 TestFunctional/parallel/ServiceCmd/Format 0.1
128 TestFunctional/parallel/ServiceCmd/URL 0.1
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
130 TestFunctional/parallel/ProfileCmd/profile_list 0.12
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.13
132 TestFunctional/parallel/MountCmd/any-port 4.16
133 TestFunctional/parallel/MountCmd/specific-port 0.84
134 TestFunctional/parallel/MountCmd/VerifyCleanup 0.97
135 TestFunctional/parallel/Version/short 0.04
136 TestFunctional/parallel/Version/components 0.28
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.62
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
141 TestFunctional/parallel/ImageCommands/ImageBuild 1.55
142 TestFunctional/parallel/ImageCommands/Setup 1.31
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.05
144 TestFunctional/parallel/DockerEnv/bash 0.29
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.45
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.21
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
154 TestFunctional/delete_addon-resizer_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 112.45
181 TestImageBuild/serial/Setup 36.81
182 TestImageBuild/serial/NormalBuild 1.23
184 TestImageBuild/serial/BuildWithDockerIgnore 0.11
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.12
189 TestJSONOutput/start/Command 89.29
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.32
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.22
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 9.2
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.2
217 TestMainNoArgs 0.03
218 TestMinikubeProfile 69.99
264 TestStoppedBinaryUpgrade/Setup 1.03
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
281 TestNoKubernetes/serial/ProfileList 31.35
282 TestNoKubernetes/serial/Stop 3.55
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
294 TestStoppedBinaryUpgrade/MinikubeLogs 0.66
299 TestStartStop/group/old-k8s-version/serial/Stop 1.91
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
312 TestStartStop/group/no-preload/serial/Stop 3.03
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
321 TestStartStop/group/embed-certs/serial/Stop 3.86
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.13
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
337 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
341 TestStartStop/group/newest-cni/serial/Stop 3.42
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-385000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-385000: exit status 85 (93.100667ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-385000 | jenkins | v1.33.1 | 08 Jul 24 12:28 PDT |          |
	|         | -p download-only-385000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 12:28:20
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 12:28:20.694736    1769 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:28:20.694898    1769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:28:20.694901    1769 out.go:304] Setting ErrFile to fd 2...
	I0708 12:28:20.694904    1769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:28:20.695028    1769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	W0708 12:28:20.695127    1769 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19195-1270/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19195-1270/.minikube/config/config.json: no such file or directory
	I0708 12:28:20.696409    1769 out.go:298] Setting JSON to true
	I0708 12:28:20.713648    1769 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1668,"bootTime":1720465232,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:28:20.713716    1769 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:28:20.718138    1769 out.go:97] [download-only-385000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:28:20.718287    1769 notify.go:220] Checking for updates...
	W0708 12:28:20.718317    1769 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball: no such file or directory
	I0708 12:28:20.721062    1769 out.go:169] MINIKUBE_LOCATION=19195
	I0708 12:28:20.724129    1769 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:28:20.729053    1769 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:28:20.732096    1769 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:28:20.735097    1769 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	W0708 12:28:20.741110    1769 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0708 12:28:20.741356    1769 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:28:20.746013    1769 out.go:97] Using the qemu2 driver based on user configuration
	I0708 12:28:20.746032    1769 start.go:297] selected driver: qemu2
	I0708 12:28:20.746045    1769 start.go:901] validating driver "qemu2" against <nil>
	I0708 12:28:20.746127    1769 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 12:28:20.747682    1769 out.go:169] Automatically selected the socket_vmnet network
	I0708 12:28:20.753648    1769 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0708 12:28:20.753737    1769 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0708 12:28:20.753807    1769 cni.go:84] Creating CNI manager for ""
	I0708 12:28:20.753823    1769 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0708 12:28:20.753877    1769 start.go:340] cluster config:
	{Name:download-only-385000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:28:20.759099    1769 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:28:20.763098    1769 out.go:97] Downloading VM boot image ...
	I0708 12:28:20.763125    1769 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/iso/arm64/minikube-v1.33.1-1720011972-19186-arm64.iso
	I0708 12:28:25.393561    1769 out.go:97] Starting "download-only-385000" primary control-plane node in "download-only-385000" cluster
	I0708 12:28:25.393578    1769 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0708 12:28:25.469855    1769 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0708 12:28:25.469884    1769 cache.go:56] Caching tarball of preloaded images
	I0708 12:28:25.470064    1769 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0708 12:28:25.475205    1769 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0708 12:28:25.475214    1769 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0708 12:28:25.554617    1769 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0708 12:28:30.874696    1769 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0708 12:28:30.875205    1769 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0708 12:28:31.570751    1769 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0708 12:28:31.570956    1769 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/download-only-385000/config.json ...
	I0708 12:28:31.570974    1769 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/download-only-385000/config.json: {Name:mk6ade450131b0b9717451de9ef19a570a5c0fec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:28:31.571200    1769 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0708 12:28:31.571381    1769 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0708 12:28:31.986335    1769 out.go:169] 
	W0708 12:28:31.993828    1769 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19195-1270/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10491dac0 0x10491dac0 0x10491dac0 0x10491dac0 0x10491dac0 0x10491dac0 0x10491dac0] Decompressors:map[bz2:0x1400000ffa0 gz:0x1400000ffa8 tar:0x1400000ff50 tar.bz2:0x1400000ff60 tar.gz:0x1400000ff70 tar.xz:0x1400000ff80 tar.zst:0x1400000ff90 tbz2:0x1400000ff60 tgz:0x1400000ff70 txz:0x1400000ff80 tzst:0x1400000ff90 xz:0x1400000ffd0 zip:0x1400000ffe0 zst:0x1400000ffd8] Getters:map[file:0x140008f6b00 http:0x140007c81e0 https:0x140007c8230] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0708 12:28:31.993861    1769 out_reason.go:110] 
	W0708 12:28:32.000687    1769 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0708 12:28:32.004647    1769 out.go:169] 
	
	
	* The control-plane node download-only-385000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-385000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-385000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.30.2/json-events (6.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-060000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-060000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=qemu2 : (6.540651333s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (6.54s)

                                                
                                    
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
--- PASS: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-060000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-060000: exit status 85 (74.940875ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-385000 | jenkins | v1.33.1 | 08 Jul 24 12:28 PDT |                     |
	|         | -p download-only-385000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 08 Jul 24 12:28 PDT | 08 Jul 24 12:28 PDT |
	| delete  | -p download-only-385000        | download-only-385000 | jenkins | v1.33.1 | 08 Jul 24 12:28 PDT | 08 Jul 24 12:28 PDT |
	| start   | -o=json --download-only        | download-only-060000 | jenkins | v1.33.1 | 08 Jul 24 12:28 PDT |                     |
	|         | -p download-only-060000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/08 12:28:32
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 12:28:32.409577    1794 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:28:32.409715    1794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:28:32.409719    1794 out.go:304] Setting ErrFile to fd 2...
	I0708 12:28:32.409721    1794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:28:32.409867    1794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:28:32.410881    1794 out.go:298] Setting JSON to true
	I0708 12:28:32.426866    1794 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1680,"bootTime":1720465232,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:28:32.426934    1794 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:28:32.431196    1794 out.go:97] [download-only-060000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:28:32.431308    1794 notify.go:220] Checking for updates...
	I0708 12:28:32.435136    1794 out.go:169] MINIKUBE_LOCATION=19195
	I0708 12:28:32.438184    1794 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:28:32.442249    1794 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:28:32.445201    1794 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:28:32.448139    1794 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	W0708 12:28:32.454083    1794 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0708 12:28:32.454226    1794 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:28:32.457135    1794 out.go:97] Using the qemu2 driver based on user configuration
	I0708 12:28:32.457145    1794 start.go:297] selected driver: qemu2
	I0708 12:28:32.457149    1794 start.go:901] validating driver "qemu2" against <nil>
	I0708 12:28:32.457219    1794 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0708 12:28:32.460184    1794 out.go:169] Automatically selected the socket_vmnet network
	I0708 12:28:32.463582    1794 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0708 12:28:32.463681    1794 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0708 12:28:32.463719    1794 cni.go:84] Creating CNI manager for ""
	I0708 12:28:32.463728    1794 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0708 12:28:32.463735    1794 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0708 12:28:32.463767    1794 start.go:340] cluster config:
	{Name:download-only-060000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:28:32.467280    1794 iso.go:125] acquiring lock: {Name:mk0270d312faa6a295feea241390baaf586d8510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 12:28:32.470093    1794 out.go:97] Starting "download-only-060000" primary control-plane node in "download-only-060000" cluster
	I0708 12:28:32.470099    1794 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:28:32.528911    1794 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:28:32.528931    1794 cache.go:56] Caching tarball of preloaded images
	I0708 12:28:32.529083    1794 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:28:32.533490    1794 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0708 12:28:32.533498    1794 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0708 12:28:32.615212    1794 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4?checksum=md5:3bd37d965c85173ac77cdcc664938efd -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0708 12:28:36.799551    1794 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0708 12:28:36.799712    1794 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0708 12:28:37.344309    1794 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0708 12:28:37.344517    1794 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/download-only-060000/config.json ...
	I0708 12:28:37.344533    1794 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/download-only-060000/config.json: {Name:mkb90c1179ed93fad0705bebb73d9c82172ff840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 12:28:37.345391    1794 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0708 12:28:37.345530    1794 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19195-1270/.minikube/cache/darwin/arm64/v1.30.2/kubectl
	
	
	* The control-plane node download-only-060000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-060000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-060000
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestBinaryMirror (0.35s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-320000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-320000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-320000
--- PASS: TestBinaryMirror (0.35s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-443000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-443000: exit status 85 (57.568917ms)

                                                
                                                
-- stdout --
	* Profile "addons-443000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-443000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-443000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-443000: exit status 85 (53.465041ms)

                                                
                                                
-- stdout --
	* Profile "addons-443000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-443000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (132.27s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-443000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-443000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (2m12.267517833s)
--- PASS: TestAddons/Setup (132.27s)

TestAddons/parallel/Registry (31.44s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 7.308458ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-fhbdv" [0855e08d-88f8-4c8e-a8ad-095689153509] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002874459s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hstm9" [abb7a72f-c76f-4e94-aaa4-67c3798a3d1b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004331334s
addons_test.go:342: (dbg) Run:  kubectl --context addons-443000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-443000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-443000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (21.08415275s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-443000 ip
2024/07/08 12:31:23 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-443000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (31.44s)
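Note: the in-cluster check above wgets the registry Service from a busybox pod, and the DEBUG line then probes port 5000 on the node address. A rough host-side equivalent, assuming the registry addon keeps publishing port 5000 on the node IP as it does in this run, would be:

    # /v2/_catalog is the standard Docker Registry HTTP API listing endpoint.
    curl -s "http://$(out/minikube-darwin-arm64 -p addons-443000 ip):5000/v2/_catalog"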

TestAddons/parallel/Ingress (18.21s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-443000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-443000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-443000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [327c2f75-d645-4500-8773-b27bcc2d8e00] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [327c2f75-d645-4500-8773-b27bcc2d8e00] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.002717792s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-443000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-443000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-443000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-443000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-443000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-443000 addons disable ingress --alsologtostderr -v=1: (7.212627459s)
--- PASS: TestAddons/parallel/Ingress (18.21s)

TestAddons/parallel/InspektorGadget (10.21s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5rm9p" [38094f06-2246-4aab-b1aa-8cbddf8efb71] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004389083s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-443000
addons_test.go:843: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-443000: (5.209420708s)
--- PASS: TestAddons/parallel/InspektorGadget (10.21s)

TestAddons/parallel/MetricsServer (5.26s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.401375ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-fkpg4" [ac6c437b-ab0f-4108-85c4-9e623fe36086] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004176667s
addons_test.go:417: (dbg) Run:  kubectl --context addons-443000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-443000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)

TestAddons/parallel/CSI (54.99s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 2.832917ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-443000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-443000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b44b5b9d-47cf-4d85-8eff-cf874d936aae] Pending
helpers_test.go:344: "task-pv-pod" [b44b5b9d-47cf-4d85-8eff-cf874d936aae] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b44b5b9d-47cf-4d85-8eff-cf874d936aae] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003833583s
addons_test.go:586: (dbg) Run:  kubectl --context addons-443000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-443000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-443000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-443000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-443000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-443000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-443000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4eacf71b-da1f-4680-9b62-1d3fb1eb6d11] Pending
helpers_test.go:344: "task-pv-pod-restore" [4eacf71b-da1f-4680-9b62-1d3fb1eb6d11] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4eacf71b-da1f-4680-9b62-1d3fb1eb6d11] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0043205s
addons_test.go:628: (dbg) Run:  kubectl --context addons-443000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-443000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-443000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-arm64 -p addons-443000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-arm64 -p addons-443000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.083194209s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-443000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.99s)
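Note: the repeated "get pvc ... -o jsonpath={.status.phase}" lines are the helper polling until each claim reports Bound before the next step. Written directly as a shell loop against the same context and claim name, the first wait is roughly:

    # Poll until the PVC is Bound (what helpers_test.go:394 is doing above).
    until [ "$(kubectl --context addons-443000 get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
        sleep 2
    done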

TestAddons/parallel/Headlamp (17.37s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-443000 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-9lwp7" [1a6f8059-5b27-4001-be99-49b2124714a3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-9lwp7" [1a6f8059-5b27-4001-be99-49b2124714a3] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.005229166s
--- PASS: TestAddons/parallel/Headlamp (17.37s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-wl6fd" [571c01d4-35c4-42a1-9333-5894d3278204] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003835167s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-443000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (40.79s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-443000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-443000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7629f1b6-7e79-4bbd-a3d7-e90a6e92c58d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7629f1b6-7e79-4bbd-a3d7-e90a6e92c58d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7629f1b6-7e79-4bbd-a3d7-e90a6e92c58d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.009020375s
addons_test.go:992: (dbg) Run:  kubectl --context addons-443000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-arm64 -p addons-443000 ssh "cat /opt/local-path-provisioner/pvc-cf7fea5e-c146-4a4f-baca-7446028d4320_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-443000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-443000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-arm64 -p addons-443000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-arm64 -p addons-443000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.309655416s)
--- PASS: TestAddons/parallel/LocalPath (40.79s)

TestAddons/parallel/NvidiaDevicePlugin (5.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2fsvf" [c6c12172-ef21-468e-8335-d67f95e7714b] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.002043333s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-443000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-q4ks7" [1fbfd819-acbd-4524-a06e-d2c4b08a5db2] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003618792s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/parallel/Volcano (37.9s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 1.2255ms
addons_test.go:897: volcano-admission stabilized in 1.348959ms
addons_test.go:889: volcano-scheduler stabilized in 1.54ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-9l4tm" [92be6663-f774-4bb5-8bd0-8db4dd877d50] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.003772375s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-k8j24" [04f58a1d-aa92-4a42-af14-e24d2fa62551] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.002349667s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-px2n8" [0f4af80c-3f32-441e-af9f-8bc80aa8b6c3] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.004116709s
addons_test.go:924: (dbg) Run:  kubectl --context addons-443000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-443000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-443000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5ecfbc86-6b4e-4135-89cb-f317bfcfefcf] Pending
helpers_test.go:344: "test-job-nginx-0" [5ecfbc86-6b4e-4135-89cb-f317bfcfefcf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [5ecfbc86-6b4e-4135-89cb-f317bfcfefcf] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 13.004210583s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-arm64 -p addons-443000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-arm64 -p addons-443000 addons disable volcano --alsologtostderr -v=1: (9.687280917s)
--- PASS: TestAddons/parallel/Volcano (37.90s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-443000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-443000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-443000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-443000: (12.21127025s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-443000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-443000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-443000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (10.22s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.22s)

TestErrorSpam/setup (35.89s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-329000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-329000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 --driver=qemu2 : (35.884886167s)
--- PASS: TestErrorSpam/setup (35.89s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 pause
--- PASS: TestErrorSpam/pause (0.64s)

TestErrorSpam/unpause (0.6s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 unpause
--- PASS: TestErrorSpam/unpause (0.60s)

TestErrorSpam/stop (64.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 stop: (12.2029195s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 stop: (26.02916125s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-329000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-329000 stop: (26.034164958s)
--- PASS: TestErrorSpam/stop (64.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19195-1270/.minikube/files/etc/test/nested/copy/1767/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.25s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-183000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-183000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (51.252850083s)
--- PASS: TestFunctional/serial/StartWithProxy (51.25s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.49s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-183000 --alsologtostderr -v=8
E0708 12:35:52.073815    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:35:52.080800    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:35:52.092862    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:35:52.114927    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:35:52.156959    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:35:52.239015    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:35:52.400741    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:35:52.722225    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:35:53.363135    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:35:54.643595    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:35:57.205654    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:36:02.328084    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:36:12.570046    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-183000 --alsologtostderr -v=8: (37.48885375s)
functional_test.go:659: soft start took 37.489253833s for "functional-183000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.49s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-183000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.56s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-183000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2049477948/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 cache add minikube-local-cache-test:functional-183000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 cache delete minikube-local-cache-test:functional-183000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-183000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-183000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (71.306167ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.55s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 kubectl -- --context functional-183000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.55s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.91s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-183000 get pods
E0708 12:36:33.051749    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.91s)

TestFunctional/serial/ExtraConfig (36.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-183000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-183000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.255158416s)
functional_test.go:757: restart took 36.255268791s for "functional-183000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.26s)
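Note: the restart passes --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision, and ComponentHealth below only confirms that the control-plane pods are Ready. To check that the flag actually reached the apiserver, one could inspect the static pod's command line; the pod name here is an assumption based on the usual kube-apiserver-<node> naming for this profile:

    kubectl --context functional-183000 -n kube-system get pod kube-apiserver-functional-183000 \
        -o jsonpath='{.spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins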

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-183000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.62s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd665370951/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.62s)

TestFunctional/serial/InvalidService (4.17s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-183000 apply -f testdata/invalidsvc.yaml
E0708 12:37:14.013100    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-183000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-183000: exit status 115 (102.945458ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32211 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-183000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.17s)
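Note: "minikube service" exits with status 115 (SVC_UNREACHABLE) here because the Service from testdata/invalidsvc.yaml has no running backing pod. Seen outside a test, checking the Service's endpoints is usually the quickest way to confirm the same condition; a minimal check against this cluster would be:

    kubectl --context functional-183000 get endpoints invalid-svc
    kubectl --context functional-183000 describe service invalid-svc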

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-183000 config get cpus: exit status 14 (29.064083ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-183000 config get cpus: exit status 14 (34.2025ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
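Note: "config get" returns exit status 14 whenever the key has never been set or was just unset, which is what both Non-zero exit entries above show. Scripts reading optional keys can fall back to a default instead of failing; a small sketch against the same profile:

    cpus=$(out/minikube-darwin-arm64 -p functional-183000 config get cpus 2>/dev/null) || cpus=2
    echo "using ${cpus} CPUs"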

TestFunctional/parallel/DashboardCmd (10.31s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-183000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-183000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2439: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.31s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-183000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-183000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.698916ms)

-- stdout --
	* [functional-183000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0708 12:37:55.554449    2426 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:37:55.554581    2426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:37:55.554584    2426 out.go:304] Setting ErrFile to fd 2...
	I0708 12:37:55.554587    2426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:37:55.554706    2426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:37:55.555769    2426 out.go:298] Setting JSON to false
	I0708 12:37:55.572242    2426 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2243,"bootTime":1720465232,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:37:55.572312    2426 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:37:55.577322    2426 out.go:177] * [functional-183000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0708 12:37:55.584278    2426 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:37:55.584352    2426 notify.go:220] Checking for updates...
	I0708 12:37:55.591329    2426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:37:55.594294    2426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:37:55.597350    2426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:37:55.600257    2426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:37:55.603311    2426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:37:55.606648    2426 config.go:182] Loaded profile config "functional-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:37:55.606890    2426 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:37:55.610192    2426 out.go:177] * Using the qemu2 driver based on existing profile
	I0708 12:37:55.617295    2426 start.go:297] selected driver: qemu2
	I0708 12:37:55.617301    2426 start.go:901] validating driver "qemu2" against &{Name:functional-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:functional-183000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:37:55.617343    2426 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:37:55.623228    2426 out.go:177] 
	W0708 12:37:55.627329    2426 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0708 12:37:55.631282    2426 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-183000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-183000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-183000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.652833ms)
-- stdout --
	* [functional-183000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0708 12:37:55.436376    2422 out.go:291] Setting OutFile to fd 1 ...
	I0708 12:37:55.436474    2422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:37:55.436477    2422 out.go:304] Setting ErrFile to fd 2...
	I0708 12:37:55.436479    2422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0708 12:37:55.436604    2422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
	I0708 12:37:55.438041    2422 out.go:298] Setting JSON to false
	I0708 12:37:55.455283    2422 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2243,"bootTime":1720465232,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0708 12:37:55.455371    2422 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0708 12:37:55.459390    2422 out.go:177] * [functional-183000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0708 12:37:55.467369    2422 out.go:177]   - MINIKUBE_LOCATION=19195
	I0708 12:37:55.467441    2422 notify.go:220] Checking for updates...
	I0708 12:37:55.475245    2422 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	I0708 12:37:55.479132    2422 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0708 12:37:55.482303    2422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0708 12:37:55.485311    2422 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	I0708 12:37:55.488334    2422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0708 12:37:55.491581    2422 config.go:182] Loaded profile config "functional-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0708 12:37:55.491826    2422 driver.go:392] Setting default libvirt URI to qemu:///system
	I0708 12:37:55.496351    2422 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0708 12:37:55.503290    2422 start.go:297] selected driver: qemu2
	I0708 12:37:55.503297    2422 start.go:901] validating driver "qemu2" against &{Name:functional-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19186/minikube-v1.33.1-1720011972-19186-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720012048-19186@sha256:0fc826bca29cbb5a8335aaf40b2c19a34f0a6b85133ca47842ce6e575d3bc2ef Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:functional-183000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0708 12:37:55.503353    2422 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0708 12:37:55.509348    2422 out.go:177] 
	W0708 12:37:55.513340    2422 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0708 12:37:55.517366    2422 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (25.81s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6f38e6e0-0617-48cc-b6a1-de3a53eb9e90] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004058333s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-183000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-183000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-183000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-183000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [daa9cf94-a50b-47f7-8121-0dbffc1811c2] Pending
helpers_test.go:344: "sp-pod" [daa9cf94-a50b-47f7-8121-0dbffc1811c2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [daa9cf94-a50b-47f7-8121-0dbffc1811c2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003720042s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-183000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-183000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-183000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6ae1f459-494f-461a-bf9b-353879f38171] Pending
helpers_test.go:344: "sp-pod" [6ae1f459-494f-461a-bf9b-353879f38171] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6ae1f459-494f-461a-bf9b-353879f38171] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003829s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-183000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.81s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh -n functional-183000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 cp functional-183000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd986576891/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh -n functional-183000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh -n functional-183000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.44s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1767/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "sudo cat /etc/test/nested/copy/1767/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.47s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1767.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "sudo cat /etc/ssl/certs/1767.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1767.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "sudo cat /usr/share/ca-certificates/1767.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/17672.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "sudo cat /etc/ssl/certs/17672.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/17672.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "sudo cat /usr/share/ca-certificates/17672.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.47s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-183000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-183000 ssh "sudo systemctl is-active crio": exit status 1 (118.233417ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-183000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-183000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-183000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2280: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-183000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-183000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-183000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [bcbc79b8-de13-4d11-9c0b-05c6e6b3dd0a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [bcbc79b8-de13-4d11-9c0b-05c6e6b3dd0a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.001738375s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-183000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.184.105 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-183000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-183000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-183000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-gvmdh" [1dc619cd-7132-4565-9cbb-24424367782c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-gvmdh" [1dc619cd-7132-4565-9cbb-24424367782c] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004268667s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.08s)

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 service list -o json
functional_test.go:1490: Took "278.266083ms" to run "out/minikube-darwin-arm64 -p functional-183000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:32227
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:32227
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "85.868667ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "34.663792ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "90.414417ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "34.543875ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

TestFunctional/parallel/MountCmd/any-port (4.16s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-183000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4196396720/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1720467469201942000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4196396720/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1720467469201942000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4196396720/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1720467469201942000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4196396720/001/test-1720467469201942000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-183000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.123917ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  8 19:37 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  8 19:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  8 19:37 test-1720467469201942000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh cat /mount-9p/test-1720467469201942000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-183000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [aeffb938-1400-4c5d-927d-0db6653abfde] Pending
helpers_test.go:344: "busybox-mount" [aeffb938-1400-4c5d-927d-0db6653abfde] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [aeffb938-1400-4c5d-927d-0db6653abfde] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [aeffb938-1400-4c5d-927d-0db6653abfde] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003501708s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-183000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-183000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4196396720/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.16s)

TestFunctional/parallel/MountCmd/specific-port (0.84s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-183000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port385089788/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-183000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.00275ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-183000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port385089788/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-183000 ssh "sudo umount -f /mount-9p": exit status 1 (61.236333ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-183000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-183000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port385089788/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.84s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.97s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-183000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup551917233/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-183000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup551917233/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-183000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup551917233/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-183000 ssh "findmnt -T" /mount1: exit status 1 (84.095375ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-183000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-183000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup551917233/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-183000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup551917233/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-183000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup551917233/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.97s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.28s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.28s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-183000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-183000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-183000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-183000 image ls --format short --alsologtostderr:
I0708 12:38:14.052795    2586 out.go:291] Setting OutFile to fd 1 ...
I0708 12:38:14.052976    2586 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:38:14.052979    2586 out.go:304] Setting ErrFile to fd 2...
I0708 12:38:14.052982    2586 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:38:14.053132    2586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
I0708 12:38:14.053592    2586 config.go:182] Loaded profile config "functional-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0708 12:38:14.053655    2586 config.go:182] Loaded profile config "functional-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0708 12:38:14.053936    2586 retry.go:31] will retry after 534.967993ms: connect: dial unix /Users/jenkins/minikube-integration/19195-1270/.minikube/machines/functional-183000/monitor: connect: connection refused
I0708 12:38:14.591854    2586 ssh_runner.go:195] Run: systemctl --version
I0708 12:38:14.591881    2586 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/functional-183000/id_rsa Username:docker}
I0708 12:38:14.619636    2586 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.62s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-183000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | 5461b18aaccf3 | 44.8MB |
| registry.k8s.io/kube-apiserver              | v1.30.2           | 84c601f3f72c8 | 112MB  |
| registry.k8s.io/kube-scheduler              | v1.30.2           | c7dd04b1bafeb | 60.5MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/minikube-local-cache-test | functional-183000 | 507791e83cf63 | 30B    |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | latest            | 443d199e8bfcc | 193MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.2           | e1dcc3400d3ea | 107MB  |
| registry.k8s.io/kube-proxy                  | v1.30.2           | 66dbb96a9149f | 87.9MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| gcr.io/google-containers/addon-resizer      | functional-183000 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-183000 image ls --format table --alsologtostderr:
I0708 12:38:14.737090    2597 out.go:291] Setting OutFile to fd 1 ...
I0708 12:38:14.737250    2597 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:38:14.737253    2597 out.go:304] Setting ErrFile to fd 2...
I0708 12:38:14.737255    2597 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:38:14.737384    2597 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
I0708 12:38:14.737815    2597 config.go:182] Loaded profile config "functional-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0708 12:38:14.737871    2597 config.go:182] Loaded profile config "functional-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0708 12:38:14.738667    2597 ssh_runner.go:195] Run: systemctl --version
I0708 12:38:14.738675    2597 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/functional-183000/id_rsa Username:docker}
I0708 12:38:14.765558    2597 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-183000 image ls --format json --alsologtostderr:
[{"id":"66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"87900000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-183000"],"size":"32900000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"84c601f3f72c87776cdcf77a73329d1f452
97e43a92508b0f289fa2fcf8872a0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"112000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTag
s":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"507791e83cf63b08b459fe82d3b12ebfacf6164d57e92c4595127c792387a8d5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-183000"],"size":"30"},{"id":"443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"107000000"},{"id":"c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"60500000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.
12-0"],"size":"139000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-183000 image ls --format json --alsologtostderr:
I0708 12:38:14.663712    2595 out.go:291] Setting OutFile to fd 1 ...
I0708 12:38:14.663852    2595 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:38:14.663855    2595 out.go:304] Setting ErrFile to fd 2...
I0708 12:38:14.663858    2595 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:38:14.664006    2595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
I0708 12:38:14.664437    2595 config.go:182] Loaded profile config "functional-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0708 12:38:14.664508    2595 config.go:182] Loaded profile config "functional-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0708 12:38:14.665386    2595 ssh_runner.go:195] Run: systemctl --version
I0708 12:38:14.665394    2595 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/functional-183000/id_rsa Username:docker}
I0708 12:38:14.693136    2595 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-183000 image ls --format yaml --alsologtostderr:
- id: 5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "87900000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "112000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 507791e83cf63b08b459fe82d3b12ebfacf6164d57e92c4595127c792387a8d5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-183000
size: "30"
- id: c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "60500000"
- id: e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "107000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-183000
size: "32900000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-183000 image ls --format yaml --alsologtostderr:
I0708 12:38:14.052751    2587 out.go:291] Setting OutFile to fd 1 ...
I0708 12:38:14.052919    2587 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:38:14.052923    2587 out.go:304] Setting ErrFile to fd 2...
I0708 12:38:14.052925    2587 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:38:14.053065    2587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
I0708 12:38:14.053568    2587 config.go:182] Loaded profile config "functional-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0708 12:38:14.053629    2587 config.go:182] Loaded profile config "functional-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0708 12:38:14.054444    2587 ssh_runner.go:195] Run: systemctl --version
I0708 12:38:14.054456    2587 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/functional-183000/id_rsa Username:docker}
I0708 12:38:14.081835    2587 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-183000 ssh pgrep buildkitd: exit status 1 (61.439417ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image build -t localhost/my-image:functional-183000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-183000 image build -t localhost/my-image:functional-183000 testdata/build --alsologtostderr: (1.418121959s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-183000 image build -t localhost/my-image:functional-183000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 18115e1f5436
---> Removed intermediate container 18115e1f5436
---> 960b25ee07ab
Step 3/3 : ADD content.txt /
---> 4df19756ff47
Successfully built 4df19756ff47
Successfully tagged localhost/my-image:functional-183000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-183000 image build -t localhost/my-image:functional-183000 testdata/build --alsologtostderr:
I0708 12:38:14.187585    2592 out.go:291] Setting OutFile to fd 1 ...
I0708 12:38:14.187818    2592 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:38:14.187825    2592 out.go:304] Setting ErrFile to fd 2...
I0708 12:38:14.187828    2592 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0708 12:38:14.187965    2592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19195-1270/.minikube/bin
I0708 12:38:14.188406    2592 config.go:182] Loaded profile config "functional-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0708 12:38:14.189114    2592 config.go:182] Loaded profile config "functional-183000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0708 12:38:14.189973    2592 ssh_runner.go:195] Run: systemctl --version
I0708 12:38:14.189982    2592 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19195-1270/.minikube/machines/functional-183000/id_rsa Username:docker}
I0708 12:38:14.218514    2592 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2829258100.tar
I0708 12:38:14.218566    2592 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0708 12:38:14.222387    2592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2829258100.tar
I0708 12:38:14.223876    2592 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2829258100.tar: stat -c "%s %y" /var/lib/minikube/build/build.2829258100.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2829258100.tar': No such file or directory
I0708 12:38:14.223892    2592 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2829258100.tar --> /var/lib/minikube/build/build.2829258100.tar (3072 bytes)
I0708 12:38:14.231821    2592 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2829258100
I0708 12:38:14.235092    2592 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2829258100 -xf /var/lib/minikube/build/build.2829258100.tar
I0708 12:38:14.238176    2592 docker.go:360] Building image: /var/lib/minikube/build/build.2829258100
I0708 12:38:14.238218    2592 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-183000 /var/lib/minikube/build/build.2829258100
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0708 12:38:15.562559    2592 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-183000 /var/lib/minikube/build/build.2829258100: (1.324358541s)
I0708 12:38:15.562636    2592 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2829258100
I0708 12:38:15.566267    2592 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2829258100.tar
I0708 12:38:15.569675    2592 build_images.go:217] Built localhost/my-image:functional-183000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2829258100.tar
I0708 12:38:15.569696    2592 build_images.go:133] succeeded building to: functional-183000
I0708 12:38:15.569700    2592 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.29164s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-183000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image load --daemon gcr.io/google-containers/addon-resizer:functional-183000 --alsologtostderr
2024/07/08 12:38:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-183000 image load --daemon gcr.io/google-containers/addon-resizer:functional-183000 --alsologtostderr: (1.972496917s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.05s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-183000 docker-env) && out/minikube-darwin-arm64 status -p functional-183000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-183000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.29s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image load --daemon gcr.io/google-containers/addon-resizer:functional-183000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-183000 image load --daemon gcr.io/google-containers/addon-resizer:functional-183000 --alsologtostderr: (1.373243709s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.287378875s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-183000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image load --daemon gcr.io/google-containers/addon-resizer:functional-183000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-183000 image load --daemon gcr.io/google-containers/addon-resizer:functional-183000 --alsologtostderr: (1.837979833s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image save gcr.io/google-containers/addon-resizer:functional-183000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image rm gcr.io/google-containers/addon-resizer:functional-183000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-183000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-183000 image save --daemon gcr.io/google-containers/addon-resizer:functional-183000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-183000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-183000
--- PASS: TestFunctional/delete_addon-resizer_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-183000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-183000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-881000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-881000 -v=7 --alsologtostderr
E0708 12:42:16.065167    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:42:16.071676    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:42:16.082730    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:42:16.104844    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:42:16.146945    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:42:16.229112    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:42:16.390797    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:42:16.712962    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:42:17.355524    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:42:18.637898    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:42:21.200348    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:42:26.322761    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
E0708 12:42:36.564925    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-881000 -v=7 --alsologtostderr: (54.269349083s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-881000 --wait=true -v=7 --alsologtostderr
E0708 12:42:57.047441    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-arm64 start -p ha-881000 --wait=true -v=7 --alsologtostderr: (58.121310375s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-881000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.45s)

                                                
                                    
TestImageBuild/serial/Setup (36.81s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-095000 --driver=qemu2 
E0708 12:52:15.101775    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0708 12:52:16.033160    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/functional-183000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-095000 --driver=qemu2 : (36.81090025s)
--- PASS: TestImageBuild/serial/Setup (36.81s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.23s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-095000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-095000: (1.231857834s)
--- PASS: TestImageBuild/serial/NormalBuild (1.23s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.11s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-095000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.11s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.12s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-095000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.12s)

                                                
                                    
TestJSONOutput/start/Command (89.29s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-122000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-122000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (1m29.288022125s)
--- PASS: TestJSONOutput/start/Command (89.29s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.32s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-122000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.32s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.22s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-122000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.22s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (9.2s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-122000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-122000 --output=json --user=testUser: (9.195855208s)
--- PASS: TestJSONOutput/stop/Command (9.20s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-007000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-007000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.810958ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d6d567ea-b014-4ac2-9e98-beedaf0f1d39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-007000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e5f604b-645a-48cd-888b-177b795d78bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19195"}}
	{"specversion":"1.0","id":"33585e24-f8ae-4bd0-84dd-2e00c0fad1c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig"}}
	{"specversion":"1.0","id":"26a2443d-d040-4e37-9973-8ea0493b2d27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"31483a7e-d7e6-4b36-a5c1-a773221c64e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5894551e-0db8-469b-b1e1-43847bb96e2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube"}}
	{"specversion":"1.0","id":"7bf69bd8-e902-45ee-afc5-1b92aeb1b6fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1278a9c2-4ae5-49a8-9010-6eee82cedf6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-007000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-007000
--- PASS: TestErrorJSONOutput (0.20s)

                                                
                                    
TestMainNoArgs (0.03s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestMinikubeProfile (69.99s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-844000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-844000 --driver=qemu2 : (34.927312334s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-846000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-846000 --driver=qemu2 : (34.409475208s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-844000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-846000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-846000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-846000
helpers_test.go:175: Cleaning up "first-844000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-844000
--- PASS: TestMinikubeProfile (69.99s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.03s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (96.904625ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19195
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19195-1270/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19195-1270/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-088000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-088000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.621ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-088000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-088000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
E0708 13:10:51.983378    1767 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19195-1270/.minikube/profiles/addons-443000/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.733827583s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.613594125s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.35s)

                                                
                                    
TestNoKubernetes/serial/Stop (3.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-088000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-088000: (3.55022075s)
--- PASS: TestNoKubernetes/serial/Stop (3.55s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-088000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-088000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.094334ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-088000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-088000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-170000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-376000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-376000 --alsologtostderr -v=3: (1.912852958s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-376000 -n old-k8s-version-376000: exit status 7 (53.389834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-376000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-172000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-172000 --alsologtostderr -v=3: (3.031534458s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-172000 -n no-preload-172000: exit status 7 (58.81675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-172000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (3.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-604000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-604000 --alsologtostderr -v=3: (3.856147583s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-604000 -n embed-certs-604000: exit status 7 (59.404666ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-604000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-601000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-601000 --alsologtostderr -v=3: (3.132412458s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-601000 -n default-k8s-diff-port-601000: exit status 7 (66.19ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-601000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-812000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.42s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-812000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-812000 --alsologtostderr -v=3: (3.421686291s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.42s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-812000 -n newest-cni-812000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-812000 -n newest-cni-812000: exit status 7 (65.308084ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-812000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/279)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.27s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-305000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-305000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-305000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-305000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-305000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-305000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-305000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-305000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-305000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-305000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-305000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: /etc/hosts:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: /etc/resolv.conf:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-305000

>>> host: crictl pods:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: crictl containers:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> k8s: describe netcat deployment:
error: context "cilium-305000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-305000" does not exist

>>> k8s: netcat logs:
error: context "cilium-305000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-305000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-305000" does not exist

>>> k8s: coredns logs:
error: context "cilium-305000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-305000" does not exist

>>> k8s: api server logs:
error: context "cilium-305000" does not exist

>>> host: /etc/cni:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: ip a s:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: ip r s:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: iptables-save:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: iptables table nat:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-305000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-305000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-305000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-305000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-305000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-305000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-305000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-305000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-305000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-305000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-305000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: kubelet daemon config:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> k8s: kubelet logs:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-305000

>>> host: docker daemon status:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: docker daemon config:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: docker system info:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: cri-docker daemon status:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: cri-docker daemon config:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: cri-dockerd version:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: containerd daemon status:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: containerd daemon config:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: containerd config dump:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: crio daemon status:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: crio daemon config:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: /etc/crio:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

>>> host: crio config:
* Profile "cilium-305000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-305000"

----------------------- debugLogs end: cilium-305000 [took: 2.170457708s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-305000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-305000
--- SKIP: TestNetworkPlugins/group/cilium (2.27s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-795000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-795000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)